Since elementary school, I dreamed of building robots. They were cool, but I also wanted to use them to get out of doing my chores. I went to MIT, where my buddies and I won third place in the 6.270 robotics competition.
Afterwards, I studied commonsense reasoning at Stanford. Professor John McCarthy was my doctoral advisor, and the grandfather I never had. We had many conversations about human-level AI.
While John coined the term “Artificial Intelligence,” I proposed “Creeping AI.” I described it as the process by which our tools (machines and computers) slowly take over our executive functions. We don’t remember phone numbers anymore because they are in our phones. Our emails and search queries are automatically spell-checked, and now our grammar is fixed as well. And look, cars can drive themselves.
Now we have ChatGPT, which reportedly passed the Turing test in May. That means that in conversation, people often cannot tell whether they are talking to it or to a human. This has wide-ranging economic implications. As a mother, I worry about what jobs our kids will have in the future. What will they do for a living when an AI can write marketing copy, build a website, reconcile account books, and even diagnose cancer (often better than we can)?
It’s essential to remember that AI is simply a tool. If you believe it is smarter than you, then it will be. It is like social media: if you treat social media as your means of connecting with people, it becomes so. Treat AI as an extension of yourself, and do not let it rule over you.
AI is very powerful, but remember its weaknesses, which mirror our own:
AI can confidently make statements that are not true, just as we do out of ego or ignorance. Techies call this “hallucination.”
AI needs a massive amount of data to work, and it is only as good as its training data. If the data leans one way, or misses something, so will the AI.
AI can only draw inferences from data that already exists. It won’t create an entirely new idea (yet). In fact, many recent AIs are trained on a snapshot of the internet.
This means it is even more important that our kids know their foundational facts and how to apply them critically to draw conclusions. They must be well-read and understand the world, people, and different viewpoints, because the smarter these AIs get, the harder it will be to fact-check them. So it is extremely important to always treat them as tools. As John said in his short story “The Robot and the Baby”: “Never ask an AI system what to do. Ask it to tell you the consequences of the different things you might do.”
One salient question is how to handle kids using AI to do their homework. This has always been an issue with any kind of technology: calculators, internet content, buying essays online… We figured it out before and we’ll do it again. Teachers will probably have to augment written offline work with oral exams to ensure the student’s mastery. Sam Altman, a co-founder of OpenAI, has proposed regulation. Perhaps we will require AIs to disclose what content they have been asked to produce.
A better question is how to use AI to help our kids learn better. One of the best ways to learn something is to write it down. Instead of asking ChatGPT to write an essay, ask it how to improve the one you wrote.
Also, how many times have you understood a math problem in a totally different way from how it was taught? You can ask ChatGPT to explain those alternative approaches.
Thus we can use AI to augment the learning paths of kids with neurodivergent learning styles, all while preserving ourselves as people of libraries, letters, and the genuine written word.
Aarati Parmar Martino is a software engineer at Google. She is a school board candidate for Central Bucks School District.