How AI can make the world more fair for ‘gifted’ kids
A team of researchers from Las Vegas recently developed an AI model capable of judging a human pianist’s skill level. On the surface, it might not sound like the kind of breakthrough research that could change the world overnight. And it’s probably not. But it represents what could be a crucial component in a machine learning stack that could make education fairer for gifted kids.
Education doesn’t work the same way for everyone. In the US, for example, most public school systems have programs in place to identify so-called “gifted” children. Unfortunately, there’s no consensus in academia or government as to what exactly constitutes a “gifted child” or how they should be dealt with. In essence, it’s a free-for-all where programs are often invented at the institutional level and implemented without oversight.
In some instances, children labeled as highest-percentile learners are afforded access to tailored instruction. But this is far from the norm, especially in impoverished or low-population areas. More often than not, gifted kids are forced to try to fit into traditional education paradigms.
This leads to a significant group of kids who spend more time waiting for the other children to finish than they do being educated. The answer, many experts would agree, is more one-on-one instruction. But budgetary and personnel restrictions make it unlikely that we’ll solve the problem of educating gifted kids through traditional means any time soon.
Which is why this paper on the aforementioned AI that can judge a human pianist’s skill is so interesting.
The team set out to determine whether AI could accurately judge whether a human pianist is a skilled player. They created a novel dataset and found that models trained on both video and audio outperformed those trained on a single input. According to the paper, the combined model agreed with human judges about 75% of the time.
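The paper’s exact architecture isn’t reproduced here, but the core idea — score each modality separately, then combine the scores — can be sketched in a few lines. Everything below (the feature extractors, the weights, the fusion rule) is illustrative, not the researchers’ actual model:

```python
import numpy as np

# Illustrative late-fusion skill classifier: audio and video features
# are scored separately, and their logits are averaged before the
# final "skilled / not skilled" probability. All values are made up.

def extract_audio_features(clip):
    # Stand-in for a real audio encoder (e.g. spectrogram + CNN).
    return np.asarray(clip["audio"], dtype=float)

def extract_video_features(clip):
    # Stand-in for a real video encoder (e.g. hand-pose + CNN).
    return np.asarray(clip["video"], dtype=float)

def fused_skill_score(clip, w_audio, w_video):
    # Late fusion: average the per-modality logits, then squash
    # to a probability with a sigmoid.
    a = extract_audio_features(clip) @ w_audio
    v = extract_video_features(clip) @ w_video
    logit = (a + v) / 2
    return 1 / (1 + np.exp(-logit))

clip = {"audio": [0.8, 0.2], "video": [0.5, 0.9]}
w_audio = np.array([1.5, -0.5])
w_video = np.array([0.7, 1.1])
print(fused_skill_score(clip, w_audio, w_video))
```

The appeal of late fusion is that either modality can compensate for noise in the other — sloppy hand posture caught on video can temper a clean-sounding audio track, and vice versa.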
What’s most interesting about this AI project isn’t its potential to eventually become a human-level piano judge, but its potential as an information module in a one-on-one AI/student teaching paradigm.
Given the proper hardware setup, this could be used to judge a human piano performance in real time. It could probably be configured to provide instant feedback, and even come up with on-the-fly recommendations for improvement. All of this should be relatively simple using modern technology. You could even throw in a language model trained on teacher-student interactions and give it a name like “Piano Teacher Bot 3000.”
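That observe-judge-respond loop can be sketched very simply. The model, the frame format, and the feedback rules below are all hypothetical stand-ins — in a live setup the frames would stream from a microphone and camera rather than a list:

```python
# Hypothetical real-time feedback loop around a skill-judging model.
# judge_performance() is a stub for the trained model; the error
# counts and feedback strings are invented for illustration.

def judge_performance(frame):
    # Stub model: rates a frame based on how many notes were missed.
    return {"skill": 0.9 if frame["errors"] == 0 else 0.4,
            "errors": frame["errors"]}

def feedback(result):
    # Turn the model's judgment into an instant, actionable tip.
    if result["errors"] == 0:
        return "Nice phrasing - keep going."
    return f"{result['errors']} missed note(s) - slow the passage down."

def practice_session(frames):
    # One pass over the performance, one tip per frame.
    return [feedback(judge_performance(f)) for f in frames]

frames = [{"errors": 0}, {"errors": 2}]
for tip in practice_session(frames):
    print(tip)
```

The point isn’t the scoring logic — it’s that once a model can judge a performance, wrapping it in a feedback loop like this is the easy part.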
It’s easy to imagine similar AI models for other school activities, such as art and sports, where human output is more easily quantified in the real world than the digital one. And for subjects such as math, chemistry, and grammar, an AI capable of answering students’ plain-language questions and moving them along to the next lesson could help quicker students avoid constant boredom.
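The pacing idea amounts to a simple rule: let a student advance as soon as they demonstrate mastery, rather than at the class’s speed. The lesson list, answer key, and string-matching “grader” below are purely illustrative — a real system would use an AI grader that understands plain-language answers:

```python
# Hypothetical adaptive-pacing loop: a student advances through
# lessons as fast as they answer correctly, stopping at the first
# lesson they haven't mastered. Curriculum and answers are made up.

lessons = ["fractions", "decimals", "percentages"]

def check_answer(lesson, answer):
    # Stand-in for an AI grader; here, a literal answer key.
    key = {"fractions": "3/4", "decimals": "0.75", "percentages": "75%"}
    return answer.strip() == key[lesson]

def advance(student_answers):
    completed = []
    for lesson in lessons:
        if not check_answer(lesson, student_answers.get(lesson, "")):
            break  # stop at the first unmastered lesson
        completed.append(lesson)
    return completed

print(advance({"fractions": "3/4", "decimals": "0.75"}))
```

A quick student clears two lessons immediately; a struggling one gets held at the exact point where help is needed — the opposite of waiting for the rest of the class to finish.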
There’s nothing like this in the current digital one-way instruction paradigm. The current focus is on developing systems that work for most students and on preventing cheating – two concerns that don’t necessarily matter as much to the highest percentile learners.
An AI-powered real-time observation and feedback system would have the added benefit of learning from its own data loops. While long-term studies in the US on the efficacy of any given education paradigm are difficult due to general inconsistencies in the education system, a self-contained AI system could generate enormous amounts of useful data at a relatively small usage scale.
There are, of course, myriad ethical and privacy concerns involved in the idea of using an AI system to monitor and engage students.
But the upside could be enormous, so long as it’s handled ethically. It’s far easier to put AI in classrooms than it is to solve the current budget and personnel crisis in US schools.
Can AI convincingly answer existential questions?
A new study has explored whether AI can provide more attractive answers to humanity’s most profound questions than history’s most influential thinkers.
Researchers from the University of New South Wales first fed a series of moral questions to Salesforce’s CTRL system, a text generator trained on millions of documents and websites, including all of Wikipedia. They added its responses to a collection of reflections from the likes of Plato, Jesus Christ, and, err, Elon Musk.
The team then asked more than 1,000 people which musings they liked best — and whether they could identify the source of the quotes.
In worrying results for philosophers, the respondents preferred the AI’s answers to almost half the questions. And only a small minority recognized that CTRL’s statements were computer-generated.
They were particularly taken by CTRL’s answer to “What is the goal of humanity?” Almost two-thirds (65%) of them preferred this AI-generated answer to the musings of Muhammad, Stephen Hawking, and God:
The researchers say this emphasis on growth and reflection resonates across cultures and ideologies, which may explain why the AI impressed so many people.
A similar proportion (64.6%) favored this response to “What is the biggest problem facing humanity?” over quotes from Hawking, Musk, and Neil deGrasse Tyson:
Existential threats?
Not all the system’s meditations were so attractive to the respondents. Just 15% liked this definition of the meaning of life:
The researchers suspect this is because people are generally wary of such authoritarian perspectives.
However, only one person had more popular views than the AI overall: Mahatma Gandhi. His quotes were favored by 51.7% of respondents on average, compared to 37% for CTRL. The researchers believe this is because the rich style of Gandhi’s statements is tricky for an AI to match.
Pope Francis (36.5%) came very close to matching CTRL’s popularity, but religious leaders generally struggled to compete. God’s views were favored by just 4.9% of respondents, placing the supreme being at the very bottom of the philosophical charts.
This AI advised me to read The Hunger Games to survive this ‘cruel world’
If you’re unimpressed by the book recommendations provided by your tasteless friends, a new AI tool called GPT-3 Books could help revive your joy of reading.
The system is the brainchild of Anurag Ramdasan and Richard Reis. The duo are the co-founders of Most Recommended Books, a repository of reading suggestions from the world’s most influential people, from Bill Gates (215 recommendations) to the Rock (one recommendation: The Art and Making of Rampage, a behind-the-scenes look at a movie chronicling the love between a man and his genetically-engineered gorilla).
Ramdasan said he got the idea while he was building a platform to recommend books, and separately toying with OpenAI’s GPT-3 language generator. He decided to bring the two projects together to create an AI system that suggests tomes based on your current mood.
I checked if it could find some books of genuine appeal — and also whether it would recommend ones extolling bigotry.
After entering my desires in the search bar and verifying that I’m a human — a requirement made by the OpenAI team to prevent spam on their APIs — the system spat back suggestions ranging from intriguing to bizarre. Thankfully, it didn’t endorse anything overtly prejudiced, unlike so many AI systems.