I Went to an AI Summit. What I Found Really Worried Me.
By Teachy McTeach
I am an English teacher at a private high school in San Francisco, publishing anonymously (thanks, BAS).
As anyone connected to education can tell you, AI has thrown a serious wrench in existing systems. Platforms like Grammarly, ChatGPT, DALL-E, and Apple Intelligence have completely changed the game for students and teachers alike. AI is by its nature virtually impossible to pin down, evaluate for, and avoid.
The whole education system needs an innovation overhaul, for sure. But administration and government move glacially, and no one is providing answers from up the chain. Meanwhile, this technology develops really, really fast. So we as educators need to figure out what to do with all the AI being thrown at us NOW.
My school generously offered me a ticket to the recent three-day GenAI Summit San Francisco. I took it. I had just spent weeks wading through essays partially and completely written by ChatGPT. School was out. I needed a change. Maybe I could learn from people on the cutting edge! See a less disenchanted future through their eyes!
Not so much.
I missed the first day, but from what my colleagues told me, and largely as described in Chris Knight’s Grit Daily article, it was a poop storm. People were waved in at the door to the Palace of Fine Arts as a handful of humans tried to make sense of reams of paper tickets and thousands of color-coded necklaces while iPads malfunctioned and wifi sputtered. Once inside, keynote speeches shifted to Q&As after AV malfunctions. Panel discussions were straight-up product pitches. My colleague started calling the event “Fyre Summit.”
Even when I arrived on day two, the schedule was half blank, with “topics to be announced shortly” for keynotes, panels, and workshops. It was as if the presenters had waited until the last minute in order to have up-to-date information, and then their AI helpers had failed to deliver.
The first talk I went to was a seminar on “Education Under AI.” The speaker worked for a small Palo Alto school called Leadways that has integrated AI into its curriculum. She made some good points about shifting teaching toward critical analysis and the assessment of information, with self-management and resilience in learning as key skills. But the school’s high school tuition is almost $60,000, and, according to its admissions chatbot, it enrolled about 50 students in 6th through 12th grade this year. So… grains of salt.
Regardless, the focus on resilience and the need for ethical awareness seem apparent, but how do you teach that to a bunch of overworked teenagers being told they need to be exceptional in five directions to have any hope of going to college at all?
The next talk I attended was a keynote by Stanford professor Johannes Eichstaedt on “Navigating and Predicting the Impact of AI on Society.” He did a great job talking us through technological optimism and pessimism, and grounding perspectives in specific questions connected to various branches of psychology. In the end, his main message seemed to be “we sure wish we had collected data ten years ago, so we could compare things like sleep, well-being, mental health, social relationships, and other key psychological domains.” He said it is not too late, but who is measuring these things now? Who is funding these studies?
You could smell the money in those gaping halls…or at least the desire for it. And, as much as I appreciated Eichstaedt’s talk, most presenters were completely skipping the conversation around Why and What This All Means for People.
A panel discussion on “AI and the Future of Work” was just an ad for the panelists’ projects, including a charming but incredibly creepy younger guy who worked for a character-AI company (in other words, fake people). Overall, this group foresaw workers shifting into modifiers of AI rather than creators of content, though “VIP users” – aka those with money – would still have human helpers. While humans as modifiers of AI for the majority could be reasonable, depending on the end context, it means that human interaction itself becomes the commodity. Already in my classrooms I am seeing an unprecedented amount of anxiety and depression in teenagers. Pew and the CDC report that suicide rates in general, and particularly among adolescents, have risen significantly since 2011. It is difficult to imagine substitutions for human interaction helping those numbers.
At the summit, there were booths with companies that promised to sift through data for you, personal assistant software that would balance your inboxes and improve workflow, and all kinds of startups guaranteed to streamline your needs. There was a huge line for the giant robot making $8 coffee. Another for one that would draw your portrait. (My friend almost had hers stolen by another woman in an eerie “that’s not you, it’s me” portrait-as-identity-theft scene that had to be de-escalated by the robot’s keeper.) In another area was a collection of AI art, much of it breathing inside of screen-frames, or morphing from one image to another. I listened to a computer-generated radio show. A flesh-and-blood guy, strangely, played a steel drum.
Back on the presentation stages, between talks, the audience was distracted by emcees filling space. They were beautiful women with impressive bios (executives at tech firms, PhD candidates, engineers), all wearing ball gowns. The men they were presenting? Some were in shorts and mismatched shoes. Hoodies and jeans. Dress shoes with no socks.
Holy misogyny, Batman.
At the end of a panel discussion on “AI-Generated Copyright” – which I foolishly assumed would be about artist ownership of work used by AI, but was of course about how to monetize the works produced by AI – a young woman was first in a long line for the question microphone. Before she could speak, one dude after another cut her off, calling out super-personalized questions and follow-ups from the audience, or literally grabbing the microphone out of her hand. Her question – which she only got to ask because a wave of pissed-off women insisted she have her turn – was, refreshingly, about compensation for human artists whose work was being used and reused. It, too, was cut off by another man who desperately wanted to insert his knowledge of the Metallica/Napster copyright infringement case of 2000. I knew this was an aspect of the tech world, but seeing it so blatantly in action was frankly shocking. Especially from people who are supposedly “igniting the AI revolution in the heart of San Francisco.”
A later panel on “Building the Future Classroom” started out pretty well. Panelist Bryan Talebi (CEO of Ahura AI) emphasized that AI should only be used openly by students over 18 years old. Dr. Yao Du (Assistant Professor of Speech-Language Pathology at USC) underlined the importance of supervised learning, particularly where political information is concerned (tune in through November for that bleak sequel). Derrick Gong (founder of CoursePals.ai) emphasized that AI should reinforce learning, and that humans are the first educators.
But here’s the thing: all of this requires control over the input young students have access to, which in the late-stage capitalist tech universe is simply impossible. Oh, but don’t worry about that. Talebi invited everyone to continue the discussion with him at a party at The Modernist from 11PM to 2AM, and our emcees reminded us that Happy Hour had started at the bar, and there were beers and cans of sake on the house! Are you enjoying your free drinks?
Look, I’m not against AI. But my fear is that AI will take over certain corners of independent thought, and create a greater divide between those who are motivated to write, think critically, and make art, and those who aren’t. Because, from what this “Summit” showed me, no one is doing anything concrete to anticipate and work with the potential effects of this technology. I want to see the good, I really do. And I believe there is a way to use tech to individualize learning or enable innovation in a way that is badly needed in my domain, but so far I am seeing tech moving very, very fast and making some people lots and lots of money, while responsibility and human contact become less and less important. People are being left out, and important discussions about equity, integrity, and intent are being left behind. Maybe I’m not having the right conversations. Please, prove me wrong.
To finish the last day, my friends/coworkers and I brought a few Anchor Steams up to a second-floor wasteland strewn with macaroni and shrimp tails in order to get a better final view of the swarming networking dance below. This was clearly the real purpose of the summit.
Exchanging cards and getting some face-time so as to connect on New Ideas and Collaboration for Success. Even the supposed titans of AI seem to know that the best way to be remembered is to shake a hand, tell a joke, make a connection.
I guess there’s just something about interacting directly with a real human being.