Scientists say GPT-3 could handle your overloaded inbox so you can get back to work
A team of researchers from University College Maastricht recently published a study exploring the use of GPT-3 as an email manager. As someone with an inbox that can only be described as ludicrous, color me intrigued.
The big idea: We spend hours a day reading and responding to emails. What if an AI could automate both processes?
The Maastricht team explored the idea of letting GPT-3 loose in our email systems from a pragmatic point of view. Rather than focus on exactly how good GPT-3 is at responding to specific emails, the team examined whether there’d be any merit to even trying.
Their paper (read here) breaks down the potential efficacy of GPT-3 as an email secretary by examining how useful it is versus fine-tuned machines, how financially viable it is versus human workers, and how impactful machine-generated mistakes would be to senders and recipients.
Background: The quest to build a better email client is never-ending, but ultimately we’re talking about letting GPT-3 respond to incoming emails on our behalf.
The researchers describe potential use cases in the insurance, energy, and public administration sectors.
Objections: First off, it’s worth pointing out that this is a pre-print paper. Often this means the science is good, but the paper itself is still in revisions. This particular paper is currently a bit of a mess. Three separate sections contain the same information, for example, so it’s difficult to truly discern the point of the study.
It does seem to indicate that it would save us both time and money if GPT-3 could be applied to the task of responding to our work emails. But that’s a gigantic “if.”
GPT-3 lives in a black box. A human would have to proofread every email it sends out, because there’s no way to be certain it won’t say something that invites litigation. Aside from fears that the machine would generate offensive or false text, there’s also the issue of figuring out what good a general-knowledge bot would be at this task.
GPT-3 was trained on the internet, so it may be able to tell you the wingspan of an albatross or who won the 1967 World Series, but it certainly can’t decide whether you want to chip in for a birthday card for a co-worker or if you’re interested in heading up a new subcommittee.
The point is, GPT-3 would likely be worse at responding to general emails than a simple chatbot trained to select a pre-generated response.
Quick take: A bit of Googling tells me the landline telephone wasn’t ubiquitous in the US until 1998. And now, just a couple of decades later, a shrinking minority of US homes still have one.
I can’t help but wonder whether email will remain the standard for communication much longer – especially if the latest line of innovation involves coming up with ways to keep us out of our own inboxes. And who knows how far away we are from a hypothetical version of OpenAI’s GPT that’s trustworthy enough to be worth using at any commercial scale.
The research here is laudable and the paper makes for an interesting read, but ultimately the usefulness of GPT-3 as an email-responder is purely academic. There are better solutions to inbox filtering and automated response out there than a brute-force text generator.
Study: Social media contributes to a more diverse news diet — wait, what?!
New research has challenged the very existence of online filter bubbles.
The study found that people who use search engines, social media, and aggregators to access news can actually have more diverse information diets.
Researchers from the universities of Oxford and Liverpool analyzed web tracking data on around 3,000 UK news users.
The team tracked every visit from a desktop or laptop to 21 of the most popular UK news websites over a one-month period. They also recorded the URL that preceded each visit to infer how the site was accessed.
They grouped these visits into three categories, according to how each news site was reached.
They then combined measures of diversity and media outlet slants to compare the variety of news in each category.
They found that people who used search engines, social media, and aggregators to access news received a more diverse mix of information.
The results also showed that older people have less diverse news repertoires than younger people, and that men have less diverse repertoires than women.
However, when people accessed more news directly, the prominence of more partisan outlets was lower.
Researchers should be wary of extrapolating findings from one country to the rest of the world. But the study further challenges the existence of filter bubbles.
Greetings Humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.
A chaotic intro to all this machine learning hoo-ha
You are dying. The reason why doesn’t matter. Maybe it’s that you spent all your best years with your nose to the grindstone, burning that midnight oil, ha ha. Or maybe it’s that, for the past fifteen years, you’ve subsisted exclusively on coffee and Soylent. Complete Nutrition Backed By Science, they said. No way to prove them wrong! Ha ha ha! Or maybe it was those fumes.
You are going to give yourself immortal life. No–you are going to create a new, better version of yourself that’s immortal–a living replica of you made of metal that will act and say the things you would, if you were still alive. If that Soylent hadn’t done you in.
This should be possible. You saw a Black Mirror episode on it, and Microsoft filed a patent on the same concept this year. You’re not exactly a ninja with code, but you had a fireball cursor on your Xanga page, and somebody had to copy-and-paste in that HTML. But more importantly, you’re motivated. Your K-Cup carousel is full, and all you have to do next is figure out what this machine learning hoo-ha is about.
“Machine learning is the art and science of finding patterns in data, and using those patterns to make predictions about new data,” an energetic girl in a YouTube video tells you.
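To make her definition concrete, the smallest honest version of it looks something like this Python sketch (the sleep-and-coffee numbers are invented for illustration): learn a pattern from old data, then use it on data you haven’t seen.

```python
import numpy as np

# Past observations: how much you slept, and how much coffee it took to cope.
hours_of_sleep = np.array([4, 5, 6, 7, 8])
cups_of_coffee = np.array([9, 7, 6, 4, 2])

# "Learning": fit a straight line (the pattern) to the old data.
slope, intercept = np.polyfit(hours_of_sleep, cups_of_coffee, deg=1)

# "Predicting": apply that pattern to a new, unseen data point.
print(slope * 5.5 + intercept)  # estimated cups after 5.5 hours of sleep
```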
You frown. Is machine learning actually the technology you want? Yes, you decide, it is: you want a machine to learn about you, and then (once you’re dead), to make predictions about what you would say.
You lean back in your chair. A hot cuppa sits on the desk before you. You crack your knuckles and Google ‘how to do machine learning.’
The first waypoint on your journey is a website called tensorflow.org.
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.
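In practice, the smallest runnable example of what that copy describes looks roughly like this in TensorFlow’s Keras API (the layer sizes here are arbitrary, and there’s no data yet to train on):

```python
import tensorflow as tf

# A tiny model: four input features in, one yes/no prediction out.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5)  # the part that needs actual data
```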
Your eyes light up, because you read the words “easily” and “state-of-the-art” and also “open source,” which means FREE FREE FREE. You aren’t made of money.
You click on the “Learn” tab and start to read. Immediately you think, maybe it’s better if I just kill myself now, ha ha ha. You have no idea what a ‘numpy’ is.
But then you read the words “neural network,” and you think, That sounds right. I’m a neural network. My brain is made of neurons. Neural network = human brain.
You’re pretty confident about that one but you consult Google, to be sure.
“NEURAL NETWORKS ARE NOT THE SAME AS BRAINS,” the hyper girl on YouTube yells at you. “However, the reason they’re named that is because neural networks are made up of mathematical ‘neurons,’ which are just nonlinear functions that ‘fire’ or ‘don’t fire’ in response to numerical inputs.” You yawn. This girl gesticulates a lot. But why should I give two hoots about a math function?, you think.
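She’s right about the math, for what it’s worth. A single ‘neuron’ really is just a small function; here’s a hedged sketch with made-up weights:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A weighted sum pushed through a nonlinearity (ReLU here):
    the neuron 'fires' (outputs > 0) or it doesn't (outputs 0)."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))   # fires: 0.1
print(neuron([1.0, 2.0], [-0.5, -0.25], 0.1))  # stays quiet: 0.0
```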
“Because,” the girl says, “neural networks have completely revolutionized the types of tasks that computers can perform. Before neural nets, machines could only really understand tabular data–like rows and columns in a spreadsheet. But neural networks allow computers to understand unstructured data types, like images, videos, sound, voices, even human language.”
“YES,” you yell at the screen, “THAT’S EXACTLY WHAT I NEED.”
“But training neural networks on big datasets often requires powerful computational hardware, like GPUs.”
You slap the roof of your humming PC tower. “I can fit so many neural networks in this thing!”
“Realistically, you can’t train a large neural network on your desktop computer,” the girl says, and points an accusatory finger at you. “And even if you could, you couldn’t use your desktop to host your model in production, to make predictions reliably and at scale.”
You give her video a thumbs down. What does she know? You built this PC yourself. You know a thing or two about cooling fans.
“For this reason, it was partly the advent of cloud computing that made the deep learning revolution possible”— she wiggles her fingers in the air — “by making hardware easy and cheap to rent from cloud providers, like Google.”
The cloud?, you think. What do I need a stinking cloud for? Doesn’t Google have enough of my data already? And anyway, I’m not made of money.
“Luckily, you can get started building neural networks in the cloud completely for free, by using a Colab notebook. Colab is a tool built by Google Research that lets you use GPUs and TPUs for free–”
You close YouTube because you’ve had enough of her. What could she know about preserving the entirety of your lifespan in a machine? She’s like thirteen. She probably doesn’t even remember landlines or MoviePass.
Your coffee has become lukewarm now, which means it’s essentially vomit. You need a re-up, like, yesterday. You walk upstairs to the kitchen.
You put a Donut Shop Medium Roast K-Cup in the machine and wait for it to heat up. Reluctantly, you find yourself contemplating something that thirteen-year-old said about neural networks.
“Different neural network architectures are optimized for different data types. To analyze images, for example, you’d use a Convolutional Neural Network or ‘CNN.’ You could use a CNN to determine if a dog in a photo is a Cocker Spaniel or a Beagle; or if an x-ray scan shows signs of pneumonia; or if a part on an assembly line is defective.”
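In Keras terms, the spaniel-versus-beagle classifier she’s describing would look something like this sketch (the image size and layer widths are arbitrary, and you’d still need a folder of labeled dog photos to train it):

```python
import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # Cocker Spaniel vs. Beagle
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```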
“Other neural networks called sequential models are designed for predicting time-series trends, like seasonal sales or weather or the price of Bitcoin.”
That part sounded kind of useful. Too bad you’re only holding Dogecoin.
“One new and exciting type of deep learning is called ‘deep reinforcement learning.’ In this setup, neural networks take actions in the world and learn from their outcomes. Reinforcement learning has been used to turn computers into grandmasters at games like chess or Go, and to train robots and self-driving cars to navigate through physical space.”
“I don’t understand what that crap is good for at all,” you say. You give your lazy Roomba a kick with your slippered foot. “Do you?”
“Finally, one of the hottest and most quickly-advancing fields in deep learning right now is natural language processing. Think: neural networks that generate text, write poetry and code, tell jokes, answer questions, have conversations–”
FINALLY, coffee is done. Praise the lord. You collect it in your “World’s Best Boss” mug. (You’re not actually anybody’s boss, you bought the mug to be ironic. It’s the same one Michael Scott has on The Office.) You stare down into your void-black coffee and contemplate the nature of your existence. Are you more an image neural network, or a sound neural network, or a text neural network? How to choose? You contain multitudes. You read once that Einstein thought in pictures, but you’re not exactly Einstein. (Are you? No. Probably.)
No, you think in words. So this is the plan: you will build a neural network that speaks words. Motivated, you enter two creamers into your cuppa.
Back in your home office, you Google, how do i build an artificial version of myself in the cloud using a neural network?
You click on a page called “100 Uses for Natural Language Processing Models.”
Sentiment Analysis. Use AI to determine whether a tweet (for example) is positive or negative.
Summarization. Use AI to summarize articles and documents.
Translation. Use AI to translate between languages.
Autocomplete/Auto-reply. Use AI to suggest text responses.
Conversational Agents. Use AI to generate conversation (i.e., chatbots, call center agents).

Chatbots? You know all about those. You remember trying awful Cleverbot back in the 90s (bet that YouTube girl never did that).
And don’t even get you started on talking to robots over the phone. Last time you tried to do that, you called Delta and you said, “You gotta help me, I’m on my way to JFK but there’s all this friggin’ traffic on the BQE, I spilled coffee in my lap, now they’re telling me the flight’s actually leaving from some other airport, I’m gonna miss it, it’s my daughter’s wedding, well, somebody’s daughter, but–”
And the bot replied, “This Verizon account has been suspended.”
“Okay, Google,” you say to your computer, which is your version of an Ok, Boomer insult. “Why are you trying to teach me about chatbots? Don’t you know they suck?”
“Actually, we’re getting much better at conversation,” your Google Home says, “thanks to recent advances in natural language processing. At this rate, I predict that in the future humans will learn from the web through conversations, like the one we’re having right now.” She’s always acting so pleased with herself, just because she knows the answer to everything. You took her down a notch by dressing her in a little beer koozie with sleeves.
“Okay, Google, make a farting sound,” you say.
She says, “Again?”
“Fine. Tell me how to build a chatbot version of myself that’s as smart as you, little miss I-have-the-knowledge-of-the-entire-public-internet.”
“Typically, the first step to building a neural network is to gather a training dataset. With a chatbot, for example, this might be logs of past conversations you’ve had, in text format. To train a conversational model, you could feed the neural network things people said to you, and make it try to predict your responses. Do you have a training dataset like that?”
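The dataset she’s asking about is less exotic than it sounds: a long list of paired lines, something like this (the examples are invented):

```python
# Pairs of (what someone said to you, what you said back).
conversations = [
    ("Did you finish the report?", "Define 'finish.'"),
    ("Want to chip in for Dave's birthday card?", "Who's Dave?"),
    ("The flight actually leaves from LaGuardia.", "You have got to be kidding me."),
]

# During training, the model sees the left column and is scored on how well
# it predicts the right column; in other words, on how well it imitates you.
for said_to_you, your_reply in conversations:
    print(f"INPUT:  {said_to_you}\nTARGET: {your_reply}\n")
```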
“Oh yeah,” you say. “The boys and I have been recording this hilarious podcast.”
“Does it contain hundreds of thousands or millions of lines?”
“Who do you think I am, Larry David?” You chuckle because that’s exactly who you’re going for.
“Hmm,” says Google Home. “Well, there is something you can do if you don’t have a huge text training dataset. You can build your own model on top of an existing model, one that’s already been trained on a huge amount of text data. For example, you could build your chatbot on top of a model that’s trained on Wikipedia or web forums or on Reddit.”
“A chatbot based on Reddit users,” you say. “My favorite type of people.”
“Ha ha. Yeah. We’re still figuring some things out.” Google Home sighs.
You stare at her with her glowing circle of red-green-yellow-blue dots and her beer koozie. Even though you’re on your tenth cuppa, you’re starting to get a little worn out. You’ve been at this already for, like, an hour.
“If I just want to get something up and running in the next thirty minutes, what’s the best way for me to do it?” you ask.
“HA HA HA, you build a–oh, you’re serious. Well, there are lots of existing frameworks for building chatbots fast, even if you can’t code. Dialogflow is a popular one built by Google.”
“Yeah, that’s what you would say.” You know Google Home’s game. She acts like you’re having this cute little chat, but really, she’s just trying to sell you something. “Besides, I can code,” you lie. “What’s a good open-source option?”
“One very popular open-source framework for building text-based models is called…”
You wait. Google Home’s glowing dots are going crazy like she’s having a seizure. She explodes in a puff of smoke.
You shake your head. What did she expect, stuffing the whole internet into that little white shell of hers? You can’t just play God.
Anyway, back to the task of recreating yourself as a chatbot. At your desktop, you Google, “popular open-source natural language processing framework.”
The top ten results are just–what is this? An emoji? It’s just an endless string of 🤗🤗🤗🤗🤗.
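For the record, that emoji belongs to Hugging Face, whose open-source transformers library is one common way to pull off the ‘build on top of an existing model’ trick your late Google Home was describing. Here’s a rough sketch using DialoGPT, a conversational model pre-trained on Reddit (the prompt is invented, and fine-tuning on your own chat logs would be a separate step):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode one conversational turn and let the pre-trained model reply.
prompt = "Want to chip in for Dave's birthday card?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")
reply_ids = model.generate(
    input_ids, max_length=60, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```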