AI mistakes referee’s bald head for football — hilarity ensued
Top football leagues and teams around the world have TV crews and streaming services at their disposal to broadcast matches to fans across the globe. However, because of the coronavirus pandemic, smaller football clubs are relying on AI-powered cameras to live-stream matches on YouTube.
While it’s a fantastic, budget-friendly way for smaller clubs to show their games, the AI can sometimes frustrate everyone. Scottish team Inverness Caledonian Thistle Football Club deployed one such camera to live-stream its match against Ayr United.
The camera was supposed to track the ball automatically using AI and broadcast the footage on YouTube. Instead, the camera’s AI mistook a bald linesman’s head for the ball and repeatedly cut to him instead of the play. Fans were not happy, and the glitch even caused them to miss the team’s lone goal.
You can watch the highlights from the AI camera to see how the focus kept drifting from the game to the linesman. If this had happened in one of the top leagues, officials would have had a hard time answering to fans and the media.
This also goes to show that while ball tracking is fairly common across sports by now, glitches like these still happen. Hopefully, the camera provider will fix this bug so bald sports stars are no longer in focus for the wrong reasons.
h/t: IFLScience
IBM announces Call for Code 2020 grand prize winner
IBM today awarded the 2020 Call for Code grand prize to the creators of Agrolly, an app that helps small farmers threatened by climate change decide what to plant and when.
The distributed team of developers from Brazil, India, Mongolia, and Taiwan, who met at Pace University, will receive $200,000 and support from IBM experts and partners to incubate, test, and deploy their solution. They will also get help from The Linux Foundation to open-source their app so developers across the world can improve and scale the tech.
Agrolly was created to mitigate the damage climate change does to farmers. The harm is particularly acute for smaller farms in emerging countries, which have limited access to resources that could help them adapt.
The app aims to fill this information gap so that small farmers can make better-informed decisions, obtain financing, and boost their economic outcomes.
It works by analyzing weather forecasts alongside crop water requirements to give each farmer information tailored to their location, crop types, and stage of growth. Farmers can use this information to see when weather conditions are favorable for growing different crops.
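As a rough illustration of that idea, here is a minimal sketch in Python. The crop list, water requirements, and tolerance threshold below are hypothetical placeholders for the sake of the example, not Agrolly’s actual model:

```python
# Illustrative sketch only: Agrolly's real model is not public.
# Crop names, water needs, and thresholds here are hypothetical.

# Approximate seasonal water needs per crop, in millimetres.
CROP_WATER_NEEDS_MM = {
    "maize": 550,
    "rice": 1100,
    "beans": 400,
}

def assess_crop(crop: str, forecast_rainfall_mm: float, tolerance: float = 0.15) -> str:
    """Compare forecast seasonal rainfall against a crop's water requirement."""
    need = CROP_WATER_NEEDS_MM[crop]
    if forecast_rainfall_mm < need * (1 - tolerance):
        return f"{crop}: drought risk (forecast {forecast_rainfall_mm} mm vs ~{need} mm needed)"
    if forecast_rainfall_mm > need * (1 + tolerance):
        return f"{crop}: excess-water risk (forecast {forecast_rainfall_mm} mm vs ~{need} mm needed)"
    return f"{crop}: conditions look favorable"

# Example: a farmer weighing options against a 500 mm seasonal forecast.
for crop in CROP_WATER_NEEDS_MM:
    print(assess_crop(crop, forecast_rainfall_mm=500))
```

In practice an app like this would fold in location-specific forecasts and growth-stage data rather than a single seasonal rainfall number, but the comparison logic is the same in spirit.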
These insights can help them to perform climate risk assessments, which are often required to get funding from financial institutions.
The app also provides a forum module for farmers to exchange information and solutions.
Agrolly is currently available as a free app on the Google Play store. In the future, the team plans to monetize the product by offering a risk solution to banks, giving them information they can use to choose which farms to fund.
The team also plans to create a paid service that connects farmers to experts who can help them with their specific needs. But Agrolly CEO Manoela Morais told TNW that the central app will always be free for farmers.
Call for Code was launched by IBM and David Clark Cause in 2018 to develop tech that tackles some of the world’s biggest challenges.
Previous editions of the program have addressed natural disasters, climate change, and COVID-19. The next one will focus on racial justice. IBM will release further information on this new program when it launches at All Things Open on October 19.
You can find more information on the winners and the Call for Code challenge on the IBM website.
Sorry, AI isn’t going to stop the spread of fake news
Disinformation has long been used in warfare and military strategy. But it is undeniably being intensified by smart technologies and social media, because these communication technologies provide a relatively low-cost, low-barrier way to disseminate information virtually anywhere.
The million-dollar question then is: Can this technologically produced problem of scale and reach also be solved using technology?
Indeed, the continuous development of new technologies such as artificial intelligence (AI) may provide part of the solution.
Technology companies and social media enterprises are working on the automatic detection of fake news through natural language processing, machine learning, and network analysis. The idea is that an algorithm identifies a piece of content as “fake news” and ranks it lower to decrease the probability of users encountering it.
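To make that concrete, here is a toy sketch of the classification-and-demotion idea using scikit-learn. The tiny training set and the demotion rule are illustrative placeholders, not any platform’s actual system:

```python
# Toy sketch of fake-news classification and down-ranking.
# The handful of training examples and the demotion rule are
# placeholders; real systems train on large labelled corpora
# and combine many more signals than raw text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Official health agency publishes vaccine trial results",
    "Miracle cure suppressed by doctors, share before deleted!",
    "Central bank announces interest rate decision",
    "Secret memo proves election was rigged, media silent",
]
train_labels = [0, 1, 0, 1]  # 0 = credible, 1 = likely fake

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def rank_score(base_engagement: float, text: str) -> float:
    """Demote an item in proportion to its predicted fake-news probability."""
    p_fake = model.predict_proba([text])[0][1]
    return base_engagement * (1 - p_fake)

print(rank_score(100.0, "Leaked memo proves media silent on rigged vote"))
```

The point of the demotion rule is that flagged content is not deleted, only shown to fewer people, which matters for the exposure effect discussed next.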
Repetition and exposure
From a psychological perspective, repeated exposure to the same piece of information makes people more likely to believe it. When AI detects disinformation and reduces the frequency of its circulation, it can break the cycle of reinforced information consumption patterns.
However, AI detection remains unreliable. First, current detection is based on assessing a text’s content and the social network around it to determine credibility. But even though AI can trace the origin of sources and the dissemination pattern of fake news, the fundamental problem lies in how it verifies the actual nature of the content.
Theoretically speaking, if the amount of training data is sufficient, the AI-backed classification model would be able to interpret whether an article contains fake news or not. Yet the reality is that making such distinctions requires prior political, cultural and social knowledge, or common sense, which natural language processing algorithms still lack.
In addition, fake news can be highly nuanced when it is deliberately altered to “appear as real news but containing false or manipulative information,” as a preprint study shows.
Human-AI partnerships
Classification analysis is also heavily influenced by theme: to determine authenticity, AI often differentiates between topics rather than genuinely assessing the content of an article. For example, articles related to COVID-19 are more likely to be labelled as fake news than articles on other topics.
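A toy demonstration of this topic bias, assuming a deliberately skewed training set in which all the fake examples happen to mention one subject:

```python
# Toy demonstration of topic leakage: when most "fake" training
# examples mention one topic, the classifier keys on the topic
# word itself rather than on whether a claim is true.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "covid vaccine causes instant mutation",   # fake
    "covid cure hidden by elites",             # fake
    "parliament passes annual budget",         # real
    "city opens new public library",           # real
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# A perfectly legitimate covid headline still scores as more
# likely fake than real, because "covid" itself became the
# strongest feature in this skewed training set.
test = vec.transform(["covid vaccines approved after peer-reviewed trials"])
print(clf.predict_proba(test)[0][1])  # probability of "fake" above 0.5
```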
One solution would be to employ people to work alongside AI to verify the authenticity of information. For instance, in 2018, the Lithuanian defence ministry developed an AI program that “flags disinformation within two minutes of its publication and sends those reports to human specialists for further analysis.”
A similar approach could be taken in Canada by establishing a national special unit or department to combat disinformation, or supporting think tanks, universities and other third parties to research AI solutions for fake news.
Avoiding censorship
Controlling the spread of fake news may, in some instances, be considered censorship and a threat to freedom of speech and expression. Even a human may have a hard time judging whether information is fake or not. And so perhaps the bigger question is: Who and what determine the definition of fake news? How do we ensure that AI filters will not drag us into the false positive trap, and incorrectly label information as fake because of its associated data?
An AI system for identifying fake news may have sinister applications. Authoritarian governments, for example, may use AI as an excuse to justify the removal of articles or to prosecute individuals not in favour of the authorities. And so, any deployment of AI, along with the relevant laws or measures that emerge from its application, will require a transparent system with a third party to monitor it.
Future challenges remain as disinformation — especially when associated with foreign intervention — is an ongoing issue. An algorithm invented today may not be able to detect future fake news.
For example, deepfakes, which are “highly realistic and difficult-to-detect digital manipulation of audio or video,” are likely to play a bigger role in future information warfare. And disinformation spread via messaging apps such as WhatsApp and Signal is becoming more difficult to track and intercept because of end-to-end encryption.
A recent study showed that 50 per cent of Canadian respondents regularly received fake news through private messaging apps. Regulating this would require striking a balance between privacy, individual security, and clamping down on disinformation.
While it is definitely worth allocating resources to combating disinformation using AI, caution and transparency are necessary given the potential ramifications. New technological solutions, unfortunately, may not be a silver bullet.
Article by Sze-Fung Lee , Research Assistant, Department of Information Studies, McGill University and Benjamin C. M. Fung , Professor and Canada Research Chair in Data Mining for Cybersecurity, McGill University
This article is republished from The Conversation under a Creative Commons license. Read the original article.