
Well, welcome to the future, again. If you’ve been on this blog these past few days, you know the drill: Big Tech CEOs, rogue theorists, designers, rappers-turned-innovators, and even relationship gurus are all riffing on artificial intelligence and quantum computing like it’s the setup to a cosmic joke here at SXSW 2025. The punchline? We’re simultaneously promised an AI utopia and warned of a sci-fi apocalypse, often by the same people. It’s as if technology’s hype train is pulling a gravity-defying stunt: speeding ahead (and taking off) while its conductors yell “Slow down, you’re going to miss the scenery!”

In this satirically insightful tour, we’ll drop in on what Qualcomm, IBM’s brain trust (with a cameo from will.i.am), Douglas Rushkoff, John Maeda, Peter Voss, Esther Perel, Adeel Akhtar (yes, even a BAFTA-winning actor has thoughts), Melanie Rieback, Arvind Krishna, Michael Biercuk, Charina Chou, and Pedro Rivero are really saying about AI, “agentic AI,” and quantum computing. Spoiler: it’s not all flying cars and sentient toasters: there are genuine breakthroughs, ethical oopsies, and society-shaking questions buried under the buzzwords. So grab a quinoa kombucha latte (this is SXSW, after all) and let’s separate the signal from the noise.

At SXSW, the hype around AI is as thick as Texas barbecue smoke, and sometimes just as liable to give you heartburn. Take Qualcomm’s splashy announcement with Black Eyed Peas frontman will.i.am: they proudly declared “AI is the new UI”, envisioning a future where we ditch menus and buttons for chatting with omnipresent AI assistants. To prove it, will.i.am demoed FYI.AI, a flashy new AI-powered personal assistant that’s supposed to organize your life and even text your friends with some personality. What could go wrong? Plenty, it turns out. In a twist worthy of Silicon Valley satire, the demo unveiled AI “personas” with so much manufactured sass and cultural flavor that they veered into “digital blackface” territory. (Yes, the future of messaging apparently includes cringey caricatures of real communities.) Will.i.am claimed these AIs would have “the flavor and energy” of real people, and indeed they did, just not in the way anyone hoped. The room went from intrigued to “Did that AI just say that?” in record time. It was a pointed lesson: relatable AI teeters on a fine line between charming and deeply awkward. Hype met reality, and reality side-eyed hype like, “Really, bro?”

To Qualcomm’s credit, the vision isn’t pure snake oil. The idea of AI as the interface – talking to our gadgets instead of poking at screens – has merit. Even design guru John Maeda notes that language is effectively “the oldest interface” we have for getting things done. Voice assistants and chatbots are evolving quickly. But Maeda also highlights a comedic contradiction in our AI dreams: one minute we want AI to build us a ridiculously complex website from a two-word prompt; the next minute we beg it to simplify that site into one button. “We over-hydrate the idea, and then we desiccate it,” he wisecracked of our manic approach to AI-powered design. In other words, we can’t decide if we want AI to do absolutely everything or make everything absolutely nothing. (Pro tip: maybe aim somewhere in between?)

Meanwhile, IBM’s CEO Arvind Krishna took the SXSW stage like a sober adult arriving at a teenagers’ house party. His message: cool it with the sci-fi fantasies and get to work. “When it comes to AI and quantum computing, it’s time to move past the sci-fi hype and start building real-world applications that expand what is possible,” Krishna said in his keynote. He reminded everyone that we have entered a new era, one where computers can learn and even “operate independently” – but that it’s not an excuse to lose our minds. The subtext was clear: if you’re waiting for Skynet or HER to show up tomorrow, you’ll miss the actual revolution quietly happening today in healthcare, climate, ICT and business.

Krishna even tossed cold water on the holy grail of AI hype: AGI, or artificial general intelligence. “I don’t believe that the current generation of AI is part of the path to [AGI],” he remarked flatly. Ouch, that’s a direct hit to the ego of every massive language model out there. And he’s not alone; veteran AI pioneer Peter Voss agrees that today’s AI, impressive as it is, isn’t very smart in a human-like way. “Software isn’t very smart. If a programmer didn’t anticipate a particular scenario, the system would fail,” Voss has explained, recounting why he set out to build AIs that “could think, learn and reason like humans do.” The prevailing AI models – from your voice assistant mishearing your request to that chatbot making up facts – are still basically fancy pattern matchers: great at regurgitating info, not so great at truly understanding or reasoning.

Voss advocates for cognitive AI that can learn incrementally and adapt, more like a human brain. “Our brain uses just 20 watts of power and can learn incrementally. Why can’t AI systems do the same?” he asks pointedly. It’s a question that highlights the inefficiency of current AI: sure, GPT-4 can write your business plan, but it also devoured a warehouse full of GPUs and a lake of electricity to read the entire internet (and still occasionally thinks you asking for a burger recipe is a request for world domination plans).

So, trend #1 in cutting-edge AI: a quiet rebellion against brute-force models. From IBM’s top brass to independent AI scholars, many argue it’s time to make AI smarter, not just bigger – focusing on reasoning, efficiency, and real problem-solving over sheer model size. This is the less sexy, non-obvious trend under the hype hood. It doesn’t make for as spicy a headline as “AI will end programming as we know it,” but in the long run it might matter more. The “AI is the new UI” ethos promises intuitive, conversational tech. Early attempts (like will.i.am’s FYI.AI personas) show how easily cultural sensitivity and good taste can be lost in translation. Relatable AI is harder than it looks, and ethics can’t be an afterthought. In short, quality over quantity (someone tell the GPTs to chill).

Agentic AI: rise of the self-starter bots

As if plain old AI wasn’t enough, 2025’s favorite buzz-phrase “agentic AI” has entered the chat. No, this isn’t AI with attitude (though some of those FYI.AI personas certainly had that). It refers to autonomous AI agents that can make decisions and take actions on their own, without micromanagement. Think of an AI that doesn’t just recommend a restaurant but goes ahead and books your table, orders your food, and maybe hires a mariachi band to serenade you – all unprompted. What could possibly go wrong?

At SXSW, this concept drew both excitement and dread. On one hand, tech titans tout agentic AI as the next productivity revolution: why waste time on drudgery when your AI minions can handle it? On the other hand, privacy advocates hear “self-starter AI” and immediately imagine a digital Sorcerer’s Apprentice, fiddling with your personal data in ways you never intended. As Signal president Meredith Whittaker bluntly warned, these hyped autonomous bots pose “profound” security risks to user privacy. Whittaker is deeply skeptical about entrusting our digital lives to code that operates without a human in the loop. Bots that reason and perform tasks for us without our input sound handy, but Whittaker noted at SXSW that every step such an agent takes – say, finding you a concert ticket, booking it, then messaging your friends about the plan – requires dipping into your data. What shows you like, your credit card info, your contacts list… “At every step in that process, the AI agent would access data the user may want to keep private,” she cautioned.

In other words, an overeager digital assistant could rifle through your life like a nosy butler. And unlike an honest Jeeves, it might not know what it shouldn’t do. Should your calendar bot really email your boss about that “doctor’s appointment” (that’s actually a job interview)? An agentic AI might, if not carefully reined in. The ethical concern is obvious: how do we prevent autonomous AIs from becoming, well, creepy little stalkers in the name of efficiency?

Ironically, even as Whittaker and others urge caution, the business world is salivating over agentic AI. Panels at SXSW had marketers hyping “multi-agent” systems to automate ad campaigns and customer service. Everyone wants an AI intern to do the grunt work. And frankly, there are cool applications: imagine AI scientists autonomously running experiments overnight, or AI lawyers slogging through case files while you sleep (finally, a lawyer that bills by the millisecond). Unexpected applications are popping up everywhere. One startup demoed an AI agent that negotiates your cable bill for you – yes, it will literally sit on hold and argue with the Comcast AI on your behalf. Is this the dawning of the Age of Ultron, or just a clever way to never talk to customer support again?

The truth lies in moderation (and careful engineering). Peter Voss, who coined the term Artificial General Intelligence back in 2001, sees autonomous AI as the eventual goal – but only if built on a foundation of genuine understanding. Without that, giving AI “agency” is like giving a toddler the keys to a Lamborghini. Sure, they’ll take action, but the outcome might be a demolished garage. The trend to watch: agentic AI will force us to embed ethics, guardrails, and yes, a healthy dose of human oversight into our systems by design. Otherwise, as Whittaker and even AI “godfather” Yoshua Bengio warn, these handy helpers could turn into high-speed privacy liabilities.
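What does “human oversight by design” actually look like in code? Here is a deliberately minimal sketch, entirely our own invention (not any vendor’s agent framework): the agent can plan whatever steps it likes, but any step that touches user data or money must clear an explicit human-approval gate before it runs.

```python
# Toy illustration of an approval gate for an agentic AI.
# All names here (SENSITIVE, run_plan, the step tuples) are hypothetical,
# chosen for the example; real agent frameworks differ.

SENSITIVE = {"read_contacts", "charge_card", "send_message"}

def run_step(step, approve):
    """Execute one planned step; sensitive steps require human approval.

    `step` is an (action, detail) tuple; `approve` is a callback that
    asks the human and returns True or False.
    """
    action, detail = step
    if action in SENSITIVE and not approve(action, detail):
        return f"BLOCKED: {action}"
    return f"DONE: {action} ({detail})"

def run_plan(plan, approve):
    return [run_step(s, approve) for s in plan]

# Example plan: book a concert ticket, then message your friends.
plan = [
    ("search_tickets", "Austin, March 14"),
    ("charge_card", "$120 ticket"),
    ("send_message", "invite 3 friends"),
]
# A human who approves the payment but vetoes the outgoing messages:
decisions = {"charge_card": True, "send_message": False}
results = run_plan(plan, lambda action, detail: decisions.get(action, False))
print(results)
# ['DONE: search_tickets (Austin, March 14)',
#  'DONE: charge_card ($120 ticket)',
#  'BLOCKED: send_message']
```

The point of the sketch is the shape, not the specifics: the privacy-touching actions are enumerated up front, and the agent physically cannot perform them without a yes from the user – Whittaker’s “human in the loop,” enforced in code rather than promised in a keynote.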

So next time someone says “Don’t worry, the AI will handle it,” you might want to ask: handle it like a trustworthy assistant, or handle it like a toddler with a marker in a freshly painted living room? We’re cautiously optimistic that with the right guidelines (and maybe a timeout corner for misbehaving bots), agentic AI can deliver on its promise without delivering us into a Black Mirror episode.

Embracing the “Weird” and wrestling with intimacy

Not all commentators are focused on code and silicon; some are looking at AI’s impact on squishy humans – our creativity, our relationships, even our sense of self. Leave it to media theorist Douglas Rushkoff and renowned designer John Maeda to spin the AI debate in a surprisingly human-centric (and slightly trippy) direction.

Rushkoff, ever the counterculture champion, walked on stage at SXSW like a digital shaman and pleaded for a return to the “weird” in tech. He reminded us that the internet started as a freaky, creative playground – a place for fringe ideas – before it got strip-malled by Big Tech into something “almost entirely utilitarian – catering to the needs of the market for predictability and profit”. In his view, generative AI (the kind that makes art, music, poetry, you name it) could be our ticket out of that corporate cul-de-sac. How so? Because now anyone can harness powerful AI to be outrageously creative, not just pump out canned marketing copy. Rushkoff argued that with these tools we can “embrace the weird, the creative – the human – and build a technological future that embraces humanity, not just the markets.”

In other words, let’s use AI to amplify our funky originality, not to further ensnare us in some algorithmic shopping funnel.

Picture a future where AI helps artists mash up styles no human would ever combine, or where writers use AI to brainstorm wild story ideas. That’s the upside Rushkoff sees: AI as a partner in crime for human creativity, sparking a renaissance of the imaginative and absurd. It’s a trend to watch: AI as a creativity liberator, not just a productivity booster. Non-obvious? Perhaps – the headlines usually talk about AI taking jobs in banking or coding, not helping your indie band drop a killer album – but it’s happening. The tools that write code can also write surrealist rap lyrics on demand. The difference is all in how we choose to use them.

On the design front, John Maeda is grappling with a tension many hadn’t considered until he voiced it: Are designers now competing against AI, or designing against it? Maeda cheekily titled his annual tech report “Design Against AI,” a phrase he admits carries a double meaning. “It’s kind of a double [‘design against AI’]: designers against AI, or am I designing against AI trying to compete? It’s got a duality to it,” he mused. His point: creatives are now in a weird dance-off with algorithms. On one hand, designers are training AIs to automate parts of their daily work (think logo generators or layout optimizers). On the other hand, they’re also figuring out what not to hand over – what aspects of human creativity should remain, well, human. Maeda urges his peers to reflect on “the parts that need to maintain their humanity”. The big question: when AI can crank out 100 design variations in a second, what is the role of the human designer? According to Maeda, it’s to focus on the uniquely human insights, the empathy and cultural context, and the creative risks a cold algorithm might never take. In short, let the AIs do the grunt sketching, while humans do the soul-infusing.

And then there’s the truly squishy side of things: relationships and intimacy. Famed relationship therapist Esther Perel stepped into the tech arena to discuss what she calls “the other AI: Artificial Intimacy.” (Clever, right?) Perel has noticed something deeply ironic: in a world hyper-connected by technology, people are feeling more alone and disconnected than ever. She attributes part of this to a flood of artificial intimacy – pseudo-experiences of connection, served up by technology, that mimic the real thing poorly. Dating apps, virtual companions, endless DMs – they give a feeling of interaction but often lack the substance. “Artificial intimacy is all the experiences that we currently have that are pseudo experiences. They should give us the feeling of something real, but they don’t,” Perel says bluntly. You swipe and get matches, you chat with bots or distant strangers, you might even have an AI “friend” to vent to at 2am. But at the end of the day, you’re still staring at a screen alone.

Perel’s insight hits a nerve in the AI ethics debate: even if AI can simulate companionship (and it’s getting spookily good at it – just ask the millions who told ChatGPT their problems), is that actually good for us? Does a chatbot that always listens make us better at human relationships, or just complacent in loneliness? She warns that relying too much on these faux connections can erode our capacity for real intimacy – the messy, demanding, but ultimately fulfilling kind that comes from dealing with actual humans. It’s an ethical concern that’s easy to overlook amidst all the gadgetry: the societal impact on mental health and community. If everyone ends up with an AI best friend and stops trying with people… well, that’s a Black Mirror episode we don’t want to live.

Yet, in true SXSW fashion, even this heavy topic had a flip side of optimism. Some are exploring how AI might enhance human relationships – for instance, AI tools that coach couples in communication (imagine an Esther Perel-bot giving you on-demand marriage advice, hopefully minus the divorce rates). There are also unexpected applications like AI “dating assistants” (yep, an agentic AI swiping and flirting for you – what could go wrong?). The key, as always, is balance and intention. Perel isn’t anti-tech; she’s just reminding us that no emulator can replace genuine human connection (at least not until Siri can give you a real hug).

Quantum Computing: “When and How”

While AI was stealing the limelight, quantum computing was the other headliner – albeit a more esoteric, brain-bending one. If AI is a wild teen pop star, quantum is like the eccentric older sibling who’s into experimental jazz: less understood, often hyped as “the next big thing,” and prone to people nodding along even if they only grasp half the words. But quantum had a big presence at SXSW 2025, and it’s time to demystify what’s really going on (and not going on) in this field.

First, the hype: Quantum computing is often sold as almost magical – computers that harness quantum mechanics to solve impossible problems overnight, break all encryption, cure cancer, and find your lost socks. We’ve all seen the breathless headlines about a coming quantum revolution. According to Reed Albergotti, a tech editor who moderated a quantum panel, there’s indeed a global race here. “This is not just a race for innovation, but for economic and geopolitical dominance in the future,” he noted, pointing out how the U.S., China, and others are pouring billions into quantum R&D. In other words, nations see quantum tech as a strategic weapon – much like the space race or nuclear arms race of yore. That’s one reason for the hype: nobody wants to fall behind in what could be the foundation of tomorrow’s computing power.

But talk to the actual quantum experts and you’ll get a more nuanced story. Charina Chou, COO of Google Quantum AI, was quick to clarify one trendy misunderstanding: quantum computers are not here to replace classical computers or even AI – they’re here to complement them. “Quantum computing can drive discoveries in material science and healthcare that traditional AI cannot solve,” she explained. It’s a specialized tool, not a universal one. So no, your laptop isn’t going quantum anytime soon, and GPT-5 won’t be running on a quantum chip in the fall. Instead, think of quantum machines as adjunct professors to regular computers: they’re extremely good at certain hard problems (like simulating molecules, optimizing huge systems, or factoring giant numbers), but for day-to-day tasks your trusty MacBook (or cloud server) still does just fine.
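If you want a feel for why those problems are hard for classical machines, here is a toy, pure-Python statevector sketch (the function names and structure are ours, not IBM’s or Google’s APIs). It builds the famous two-qubit Bell state with a Hadamard gate plus a CNOT, and hints at the scaling wall: simulating n qubits classically means tracking 2**n complex amplitudes, which is exactly what blows up for the molecules quantum hardware is aimed at.

```python
# Toy statevector simulator: amplitudes are indexed by bitstrings,
# so a 2-qubit state is a list of 4 complex numbers.
import math

def apply_h(state, qubit):
    """Apply a Hadamard gate to `qubit` of a statevector."""
    s = 1 / math.sqrt(2)
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        j = i ^ (1 << qubit)          # index with this qubit flipped
        if (i >> qubit) & 1:          # |1> -> (|0> - |1>) / sqrt(2)
            new[i] -= s * amp
        else:                         # |0> -> (|0> + |1>) / sqrt(2)
            new[i] += s * amp
        new[j] += s * amp
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (swap paired amplitudes)."""
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1 and not (i >> target) & 1:
            j = i | (1 << target)
            new[i], new[j] = state[j], state[i]
    return new

# Start in |00>, then entangle the two qubits:
state = [1 + 0j, 0j, 0j, 0j]
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)

probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # [0.5, 0.0, 0.0, 0.5] -> only |00> and |11> survive
```

Measuring one qubit of that final state instantly tells you the other: that correlation (entanglement) is the resource the real machines exploit. Add qubits and the list doubles each time – 50 qubits already means about a quadrillion amplitudes – which is why hybrid setups hand only the truly quantum part of a problem to the quantum chip.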

In fact, one of the most concrete demos at SXSW to cut through the fog came from Dr. Pedro Rivero of IBM. He showcased something that sounds straight out of science fiction: connecting an IBM Quantum System Two computer with Japan’s Fugaku supercomputer to jointly simulate a molecule believed to be integral to the origin of life. The target was iron sulfide, dubbed the “cradle of life” molecule – basically a chemical thought to spark early biological processes. This quantum-classical tag team accurately simulated how the molecule behaves, something classical computers alone struggle to do.

Quantum computing is already tackling niche but impactful problems, like materials science and drug discovery, by doing the math that’s beyond classical reach. It’s delivering value in these domains today, albeit in early forms. As D-Wave’s CEO Alan Baratz put it, “Quantum computing is delivering tangible value today, but there is a lack of clarity about how to benefit from this technology now” – translation: it works for some things, but most people don’t yet know how to use it or what for, unless you’re a PhD with a very specific problem.

So the non-obvious trend in quantum is a sobering one: Progress is real – more qubits, more stability, actual hybrid computing feats – but so are the challenges. Qubits (quantum bits) are finicky little devils; they decohere (read: “forget” their quantum state) at the drop of a hat, often literally if the temperature rises a smidge. Engineers at IBM, Google, and startups like Q-CTRL (founded by Michael Biercuk, a quantum physicist tired of the hype) are working on error correction and control techniques to keep those qubits in line. Biercuk has long warned that “the field is exciting enough without the need for overblown claims.”

He’s basically the hype police of quantum, frequently reminding us that a useful general-purpose quantum computer isn’t arriving next week. Many at SXSW echoed this pragmatic tone: we need to talk about when and how quantum will impact us, not if. The question, as one SXSW article put it, is not whether quantum will change the world but “when and how it will happen.”

Quantum and AI will work hand-in-hand – each doing what it’s best at. Already, quantum computers (some with just tens or hundreds of qubits) are being used in real experiments – from searching new materials for batteries to understanding proteins for pharma. IBM’s SXSW demo of simulating an origin-of-life molecule with a quantum+classical hybrid is a prime example. It’s a glimpse of practical quantum advantage in action, very specific but very powerful. Let’s not ignore that whoever leads in quantum could have a serious national security edge (codebreaking, advanced AI training, etc.). That’s why it’s become a space race of sorts. But unlike the Apollo days, this time the competition includes corporations alongside countries – a collaborative and competitive arena all at once.

The society side of emerging tech

Zooming out, what do these advancements mean for society at large? This is where our cast of thinkers really gets diverse. We have folks like Melanie Rieback, a cybersecurity expert and entrepreneur, who isn’t talking algorithms at all – she’s talking about reinventing the tech business model to be more ethical. Rieback has championed something called “post-growth entrepreneurship,” essentially saying, maybe the goal of every startup shouldn’t be to become a unicorn and take over the world. Shocking, I know, especially at an event where every other person’s wearing a t-shirt with their app’s logo. At SXSW, Rieback joined others in exploring alternatives to the Silicon Valley growth-at-all-costs playbook. As one observer noted, “Melanie… talked about what it means to be truly disruptive. As you can imagine, it’s not the startup unsustainable formulaic pipe dream which every startup team/creator is hard sold.” Real innovation might mean building tech companies that prioritize sustainability, community, and long-term impact over just blitzscaling and exiting with a sack of cash.

This perspective is a needed ethical counterbalance to the hype. Because for all the cool demos and big ideas, somebody has to ask, who benefits? Who might be harmed? Rieback’s own company is a not-for-profit security firm – practically an oxymoron in today’s VC-fueled world – which exists to protect society rather than maximize profit. Her trend is a quiet but powerful one: tech for good, owned by the people it serves. It’s a reminder that the future of AI and quantum doesn’t have to be dictated solely by trillion-dollar companies; there’s room for co-ops, open-source movements, and non-profits to shape the narrative. When AI is helping run hospitals or quantum optimizes city grids, will the public have a say in how it’s used? Folks like Rieback are working to make sure the answer could be yes.

Lastly, weaving through all these discussions is a thread of societal impact that’s both hopeful and sobering. On the hopeful side, AI and quantum promise to tackle some of our toughest challenges – climate modeling, pandemic responses, education gaps (imagine personalized AI tutors for every child). On the sobering side, there’s a palpable fear of displacement and misuse. Who loses their job? Who gains a surveillance apparatus in their backyard? SXSW panels wrestled with these, often landing on an answer nobody loves but everyone accepts: governance and guidance are key. The tech won’t wait for us to sort it out – it’s on humanity to set the rules of the road now. As one quip put it, separating hype from reality is not just a matter of consumer protection, but civilization preservation. (Okay, that might be a tad dramatic – or is it?)

Keeping it real (and really interesting)

After all the talks, demos, tweets, and think pieces, what’s the verdict? Should we be laughing, crying, or doing a little of both about AI, agentic AI, and quantum computing? A little of both, probably. Tech leaders promising utopia and apocalypse in the same keynote; quantum physicists promising world-changing breakthroughs… in due time, please be patient. It has the cadence of a long-running joke with an earnest payoff.

Next time you hear a keynote speaker proclaiming AI or quantum will solve all our problems, check if their other hand is on the fire alarm – because as we saw, the same folks hyping the future are often warning about it in the next breath. That’s not hypocrisy; it might just be wisdom hard-won from experience. We can be excited about the possibilities and critical of the pitfalls at the same time – in fact, we must.

So here’s to a future that’s a little bit sci-fi, a little bit satirical, and hopefully very savvy. A future where AI helps us be more human, quantum computing actually fixes things (instead of just impressing our brains), and where the only blackface we see is in cautionary tales that helped us build better safeguards. If we keep our heads on straight – questioning the hype, demanding the reality – we might just get the best punchline of all: technology that truly improves lives, with none of the tragicomic side effects.
