Microsoft is releasing a preview of its Azure Quantum computing platform
Microsoft today announced that it's making Azure Quantum, its quantum computing platform for developers and companies, available as a limited preview.
At its Build developer conference, the company said it is offering the platform to select partners and developers as a preview. If you're a selected partner, you will get access to quantum hardware and software from Microsoft, 1QBit, Honeywell, IonQ, and QCI.
You might not be able to access the platform right away, but you can always start learning about quantum computing. Microsoft has already open-sourced its Quantum Development Kit (QDK) so people can teach themselves the basics.
Today, the company also introduced two courses on the Microsoft Learn platform to help you get started. These courses will walk you through building a program with the Q# programming language.
While we might not see applications of quantum computing in everyday life yet, companies working on these technologies are introducing initiatives so developers can get a head start. In March, Google released an open-source version of TensorFlow for quantum computers. Last year, Amazon also started offering quantum computing as a service with Amazon Braket. For now, that service is available in a limited phase and lets you experiment with quantum circuits.
While Microsoft has said it is working on a different kind of qubit, the topological qubit, it hasn't publicly announced that it has achieved a working qubit yet.
How to build a machine learning model in 10 minutes
I like to divide my machine learning education into two eras:
I spent the first era learning how to build models with tools like scikit-learn and TensorFlow, which was hard and took forever. I spent most of that time feeling insecure about all the things I didn't know.
The second era, after I kind of knew what I was doing, I spent wondering why building machine learning models was so damn hard. After my insecurity cleared, I took a critical look at the machine learning tools we use today and realized this stuff is a lot harder than it needs to be.
That's why I think the way we learn machine learning today is about to change.
It's also why I'm always delighted when I discover a tool that makes model-building fun, intuitive, and frictionless. That couldn't be more true of Teachable Machine. It's a free program, built by Google, that lets you train deep learning models right from your browser. It's so easy that even elementary school kids can use it!
Today, you can train image, pose, and sound models with Teachable Machine.
Under the hood, it uses TensorFlow.js to train a model, which is nice for a couple of reasons:
Training happens locally, in the browser, so none of your data gets sent to the cloud. Go data privacy!
When training is done, Teachable Machine lets you export your model so you can use it later, in your own app, from JavaScript (see the sketch just below this list).
By design, TensorFlow.js uses your GPU (if you have one) to train models, so it’s kinda speedy!
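If you're curious what using an exported model looks like in practice, here is a minimal sketch, assuming an image model exported in the TensorFlow.js layers format. The model URL, the element ID, the class names, and the exact preprocessing (224x224 inputs scaled to [-1, 1]) are illustrative assumptions, not something Teachable Machine guarantees:

```typescript
// Minimal sketch: load an exported Teachable Machine image model in the browser
// with TensorFlow.js and classify a single <img> element on the page.
// MODEL_URL, the element ID, and CLASS_NAMES are placeholders (assumptions).
import * as tf from '@tensorflow/tfjs';

const MODEL_URL = 'https://example.com/my-model/model.json'; // placeholder URL
const CLASS_NAMES = ['cat', 'dog']; // whatever labels you trained with

async function classify(): Promise<void> {
  // Load the exported layers model (model.json plus weight files).
  const model = await tf.loadLayersModel(MODEL_URL);

  // Turn an image element into a tensor shaped the way the model expects.
  const img = document.getElementById('input-image') as HTMLImageElement;
  const input = tf.browser.fromPixels(img)
    .resizeNearestNeighbor([224, 224]) // assumed input size
    .toFloat()
    .div(127.5)
    .sub(1)         // assumed scaling to [-1, 1]
    .expandDims(0); // add a batch dimension

  // Run inference and report the highest-scoring class.
  const scores = (model.predict(input) as tf.Tensor).dataSync();
  const best = scores.indexOf(Math.max(...scores));
  console.log(`Prediction: ${CLASS_NAMES[best]} (score ${scores[best].toFixed(3)})`);
}

classify();
```

Because everything runs in the browser, the same snippet keeps working offline once the model files are served alongside your app.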
If you want the whole scoop, check out my latest episode of #MakingWithML:
A closer look at the AI Incident Database of machine learning failures
The failures of artificial intelligence systems have become a recurring theme in technology news. Credit scoring algorithms that discriminate against women. Computer vision systems that misclassify dark-skinned people. Recommendation systems that promote violent content. Trending algorithms that amplify fake news.
Most complex software systems fail at some point and need to be updated regularly. We have procedures and tools that help us find and fix these errors. But current AI systems, mostly dominated by machine learning algorithms, are different from traditional software. We are still exploring the implications of applying them to different applications, and protecting them against failure requires new ideas and approaches.
This is the idea behind the AI Incident Database (AIID), a repository of documented failures of AI systems in the real world. The database aims to make it easier to see past failures and avoid repeating them.
The AIID is sponsored by the Partnership on AI (PAI), an organization that seeks to develop best practices on AI, improve public understanding of the technology, and reduce potential harm AI systems might cause. PAI was founded in 2016 by AI researchers at Apple, Amazon, Google, Facebook, IBM, and Microsoft, but has since expanded to include more than 50 member organizations, many of which are nonprofit.
Past experience in documenting failures
In 2018, members of PAI were discussing research on an "AI failure taxonomy," a consistent way to classify AI failures. But there was no collection of AI failures from which to develop the taxonomy. This led to the idea of creating the AI Incident Database.
"I knew about aviation incident and accident databases and committed to building AI's version of the aviation database during a Partnership on AI meeting," Sean McGregor, lead technical consultant for the IBM Watson AI XPRIZE, said in written comments to TechTalks. Since then, McGregor has been overseeing the AIID effort and has helped develop the database.
The structure and format of AIID were partly inspired by incident databases in the aviation and computer security industries. The commercial air travel industry has managed to increase flight safety by systematically analyzing and archiving past accidents and incidents within a shared database. Likewise, a shared database of AI incidents can help spread knowledge and improve the safety of AI systems deployed in the real world.
Meanwhile, the Common Vulnerabilities and Exposures (CVE) database, maintained by MITRE Corp., is a good example of documenting software failures across various industries. It has helped shape the vision for AIID as a system that documents failures of AI applications in different fields.
“The goal of the AIID is to prevent intelligent systems from causing harm, or at least reduce their likelihood and severity,” McGregor says.
McGregor points out that the behavior of traditional software is usually well understood, but modern machine learning systems cannot be completely described or exhaustively tested. Machine learning derives its behavior from its training data, and therefore, its behavior has the capacity to change in unintended ways as the underlying data changes over time.
"These factors, combined with deep learning systems' capability to enter into the unstructured world we inhabit, means malfunctions are more likely, more complicated, and more dangerous," McGregor says.
Today, we have deep learning systems that can recognize objects and people in images, process audio data, and extract information from millions of text documents in ways that were impossible with traditional, rule-based software, which expects data to be neatly structured in tabular format. This has enabled applying AI in physical-world settings such as self-driving cars, security cameras, hospitals, and voice-enabled assistants. And all these new areas create new vectors for failure.
Documenting AI incidents
Since its founding, AIID has gathered information about more than 1,000 AI incidents from the media and publicly available sources. Fairness issues are the most common AI incidents submitted to AIID, particularly in cases where an intelligent system, such as a facial recognition program, is used by a government. "We are also increasingly seeing incidents involving robotics," McGregor says.
There are hundreds of other incidents that are in the process of being reviewed and added to the AI Incident Database, according to McGregor. "Unfortunately, I don't believe we will have a shortage of new incidents," he says.
Visitors can query the database for incidents based on the source, author, submitter, incident ID, or keywords. For instance, searching for "translation" shows there are 42 reports of AI incidents involving machine translation. You can then filter the results down further based on other criteria.
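To make the idea concrete, here is a hypothetical sketch of the same kind of keyword filtering, run over a local JSON snapshot of incident reports. The file name and the record fields are assumptions made purely for illustration; they are not the actual AIID schema or export format:

```typescript
// Hypothetical sketch: keyword search over a local JSON snapshot of incident reports.
// The file name and the fields of IncidentReport are illustrative assumptions,
// not the real AIID export format.
import { readFileSync } from 'fs';

interface IncidentReport {
  incident_id: number; // assumed field names
  title: string;
  source: string;
  text: string;
}

function searchReports(path: string, keyword: string): IncidentReport[] {
  const reports: IncidentReport[] = JSON.parse(readFileSync(path, 'utf8'));
  const needle = keyword.toLowerCase();
  return reports.filter(
    (r) =>
      r.title.toLowerCase().includes(needle) ||
      r.text.toLowerCase().includes(needle)
  );
}

// Example: find reports that mention machine translation.
const hits = searchReports('incident_reports.json', 'translation');
console.log(`${hits.length} matching reports`);
for (const report of hits) {
  console.log(`#${report.incident_id}: ${report.title} (${report.source})`);
}
```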
Putting the AI incident database to use
A consolidated database of incidents involving AI systems can serve various purposes in the research, development, and deployment of AI systems.
For instance, if a product manager is evaluating the addition of an AI-powered recommendation system to an application, she can check the 13 reports and 10 incidents in which such systems have caused harm to people. This will help the product manager set the right requirements for the feature her team is developing.
Other executives can use the AI Incident Database to make better decisions. For example, risk officers can query the database for the possible harms of employing machine translation systems and develop the right risk mitigation measures.
Engineers can use the database to find out the possible harms their AI systems can cause when deployed in the real world. And researchers can use it as a source of citations in papers on the fairness and safety of AI systems.
Finally, the growing database of incidents can prove to be an important caution to companies implementing AI algorithms in their applications. “Technology companies are famous for their penchant to move quickly without evaluating all potential bad outcomes. When bad outcomes are enumerated and shared, it becomes impossible to proceed in ignorance of harms,” McGregor says.
The AI Incident Database is built on a flexible architecture that will allow the development of various applications for querying the database and obtaining other insights, such as key terminology and contributors. McGregor discusses the full details of the architecture in a paper to be presented at the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). AIID is also an open-source project on GitHub, where the community can help improve and expand its capabilities.
With a solid database in place, McGregor is now working with Partnership on AI to develop a flexible taxonomy for AI incident classification. In the future, the AIID team hopes to expand the system to automate the monitoring of AI incidents.
“The AI community has begun sharing incident records with each other to motivate changes to their products, control procedures, and research programs,” McGregor says. “The site was publicly released in November, so we are just starting to realize the benefits of the system.”
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.