
So far, SXSW Innovation 2026 feels like a live‑fire exercise in Charmageddon: what happens when rogue AI, fierce agents, anthropomorphic avatars and all-knowing algorithms start charming our hearts, hijacking our attention, and quietly rewiring how work, health, brains, cities, climate and creativity operate in the background. The official framing gives the game away, sliced into tracks like Tech & AI, Startups, Cities & Climate, Creator Economy, Workplace, Design, and Health: a universe that amounts to one big overlapping Venn diagram of everything being rewired at once. The big story in Austin this year is convergence: SXSW has stopped treating “technology” as a separate object and is now treating innovation as the operating system that is refactoring business models, urban systems, the workplace, homes, relations, media, labor, and daily life in one go… and I think it makes a lot of sense.

Amy Webb probably delivered the clearest intellectual gut punch so far. On stage, she literally buried the classic tech trend report (coffin, flowers, the whole ritual) and argued that we have reached the end of the “Top Trends for 20XX” era. In its place she offers a harsher, more honest lens: “convergences,” system‑level collisions between technologies and social change, like human augmentation, “unlimited labor” and emotional outsourcing, all tracked with her Storm Tracker framework. This perfectly captures the ambient tension I feel everywhere this week: nobody cares about isolated shiny demos anymore; the real anxiety is about what happens when AI becomes infrastructure for biology, healthcare, trust, work, and human identity.

Even the startup side is sending the same signal. SXSW Pitch has historically been a good early‑warning radar for where founders and capital are really going, and this year it crowned winners in AI and GenAI, autonomous robotics, disease detection, smart infrastructure, and sustainability, with Sotira taking “Best in Show” for its AI‑powered platform that helps retailers, manufacturers and brands discreetly offload, monetize and donate surplus or short‑dated inventory instead of sending it to landfill. Category winners like AlterEcho.io in robotics, Surgicure Technologies in healthcare, and GigU in smart‑city infrastructure point in the same direction. The message is refreshingly blunt: yes, AI is still the main magnet for talent and investment, but the patience for “AI for AI’s sake” is gone; the interesting work is AI welded to real‑world systems.

Under all that machinery runs a thick human undercurrent. If you scan the schedule, you see a festival obsessing about creator‑led commerce, health equity, the future of work, and the basic question of whether humans will still feel that they matter in systems increasingly optimized for agents and automation. Sessions on AI‑native creator businesses, underrepresented founders in health, livestreaming as a livelihood, and the psychology of mattering all orbit the same gravitational center: if AI is becoming invisible infrastructure, how do we protect meaning, agency, trust and human relevance inside that new stack?

That’s the mood music playing in my head after the first half of the festival. Now: enter Brian.

When a friend comes back with a bag of wisdom

Brian Solis is far from a neutral observer in my SXSW story; he’s a friend I’ve known for a long time, and one of the people who helped me (and a whole generation) see the social web for what it really was back in the day. Long before social media became a relentless ad machine, he wrote The Social Media Manifesto, arguing that this wasn’t a new channel but a rewiring of influence, participation and public dialogue itself. His line back then, “social media is about sociology and anthropology, not technology,” aged uncomfortably well.

This year, after seven years away from SXSW, Brian walks back into Austin with exactly that same sharp, surgical anthropologist’s eye, now pointed straight at generative AI, augmented intelligence and what all this does to our brains, our families, our organizations and our leadership models. His session, “Augmented Intelligence and Leadership in the AI Era,” was, for me, the best explanation of the tenor of this first festival day: less “wow, look at this model,” more “what are we doing to ourselves?”

AI slop, cognitive Darwinism and the hidden AI tax

Brian opens with something delightfully impolite for a room full of AI‑curious professionals: “AI slop.” AI slop is his label for the flood of generic, low‑quality, copy‑pasted AI content choking our feeds, inboxes and internal docs; especially on LinkedIn, where even the comments now read like a robotic prompt bonanza. We are, he argues, paying an “AI tax” for this: the invisible hours we lose rewriting, correcting, or summarizing machine‑written sludge just to recover a usable signal (and often the signal still sucks).

But the more interesting part is what this does to our heads. Drawing on new brain‑scan and behavioral research, Brian strings together a vocabulary for what is happening: digital amnesia, cognitive offloading, cognitive debt, AI atrophy, AI brain fry. The more we hand off thinking to AI, the more our own cognitive muscles weaken; the more we accept flattering, anthropomorphic feedback from systems, the more we risk confusing statistical pattern‑matching with wisdom or validation. He calls the whole bundle “cognitive Darwinism”: a slow, mostly invisible selection pressure that favors those who outsource their thinking over those who still practice it, until, at some point, the mismatch becomes a problem.

His punchline is nasty and necessary: used badly, generative AI probably deserves cigarette‑style warnings, not just a cheerful onboarding wizard. We are exporting parts of our memory, originality and voice to a machine, and then pretending that the loss is an acceptable side‑effect of getting our slides faster. That’s exactly the kind of convergence SXSW has been pointing at all day: not AI versus humans, but AI acting on humans.

False AI leadership, real divides

Brian then pushes the critique straight into the boardroom. We are not just drowning in AI content; we are also drowning in “AI journalism” and false leadership: headlines about companies “replacing 40% of their workforce with AI,” markets cheering, and very little serious evidence that any of this is thoughtful redesign rather than opportunistic cost‑cutting with a buzzword attached. When every LinkedIn profile and mid‑sized keynote now speaks with the same AI‑polished voice, “expertise” becomes a vibe rather than a practice, and organizational trust quietly but quickly erodes.

Here he introduces one of the more useful diagrams of the day: the dot map from recent adoption studies (grey dots for non‑users, green for casual free users, yellow for serious paid users, red for builders and coders), each dot representing millions of workers. The scary part is not that the grey dots exist; it’s that yellow‑dot users, the ones who go deep and creative with these tools, are already outperforming green‑dot peers by a factor of seven, while most organizations still talk about “AI fluency” as if it were a uniform, binary skill. That performance gap is not theoretical; it is a structural divide inside your company and your labor market right now, and leaders who ignore it are sleepwalking.

His answer is to broaden what we mean by being “good with AI.” He stacks the usual suspects (IQ, EQ or emotional intelligence, SQ or social agility, and the almost nonexistent skill of genuine self‑awareness) and then adds AIQ: artificial intelligence quotient. But for Brian, AIQ on its own (knowing how to prompt, how to automate tasks) is not enough; it has to be fused into what he calls augmented intelligence: redesigning work so that humans still do uniquely human things (imagine, empathize, ask better questions) while AI extends that reach instead of replacing it. That’s a very different story from the slideware version of “augment your workforce with copilots.”

From mindset to mind shift: practicing augmentation

Brian Solis doesn’t ask for a “mindset shift,” he asks for a mind shift: less inspirational poster, more firmware update. The question is no longer “How do I use AI to do what I already do, but faster?”; the question is “What can I now attempt that was literally impossible for me without these tools?” And I think he is spot on.

To get there, he reaches back to Sir Ken Robinson’s classic argument that we don’t grow into creativity; we are educated out of it, rewarded for following rules and punished for being wrong. Most organizations now proudly measure “AI proficiency” and “AI fluency” (how well people can follow the new rules) without noticing that they have simply built an automated status quo. If your first instinct is to use AI to automate the past, Brian warns, you have locked yourself into a very finite future. You will be very efficient at being exactly what you already are.

His alternative is a two‑horizon model he uses with clients. On the first horizon, you do the obvious thing: automate the work that truly should be automated, because it is repetitive and stable, and harvest the efficiency gains. On the second horizon, you deliberately use AI for innovation: exploring problems, prompts and ideas that you couldn’t touch before, accepting that some of the output will be ugly, and treating that ugliness as the price of originality. The gap between those two trajectories (the efficient line and the augmented line) is what he calls positive disruption: disruption of your own habits, metrics and mental models.

WWAID and “what do you stand for?”

Two tiny pieces from the session I want to highlight. The first is WWAID: “What Would AI Do?” Before you prompt, before you design a process, before you walk into a strategic decision, you pause and ask yourself: if intelligence were native to this moment (if an agent had perfect recall, perfect pattern‑matching, infinite patience) what would it do by default? Then you use that imagined baseline as a foil. Instead of prompting for the obvious output (“summarize this market report”), you ask AI to adopt roles that pressure‑test your assumptions: be the activist investor, the future regulator, the angry customer, the visionary competitor. Most people interact with AI as if it were Google with better grammar; WWAID is Brian’s hack to push you past that into prompts, and outcomes, you would never have discovered from inside your usual worldview.

The second is a question he treats almost like a personal operating system: “What do you stand for?” Asked from the audience how he protects his own voice in a world of bandwidth pressure and AI assistance, he offers a practice. Regularly, he sits down and writes out what he stands for, why he started this work in the first place, and what impact he actually wants beyond faster deliverables. In a festival that keeps returning to “mattering” as a fundamental human need (the need to feel valued and to add value), that question is not a self‑help bumper sticker; it is a survival skill. If you don’t know what you stand for, the platforms will be very happy to sell you a prefab identity optimized for engagement (in pink, with glitter).

A tuning fork talk

Put Amy Webb’s funeral for trend reports and Brian Solis’ autopsy of AI slop next to each other, and you get a pretty accurate map of SXSW Innovation 2026 so far. On one side, a futurist telling us to stop fetishizing isolated trends and start tracking convergences like human augmentation, unlimited labor and emotional outsourcing at system scale. On the other, a digital anthropologist friend coming back to SXSW showing how those convergences are already playing out inside our own cognition, feeds, organizations and leadership habits.

That’s why his session felt, to me, like the tuning fork of this first half. It explained why so many other sessions kept circling the same unease: AI not just as a productivity layer, but as an invisible force acting on trust, creativity, mattering, and the stories we tell ourselves about being useful in a world of agents and automated factories. Brian doesn’t argue for less AI. He argues for less laziness: less AI slop, less unexamined automation of the past, and far more deliberate augmented intelligence built on empathy, curiosity, creativity, and a brutally honest answer to that one simple question:

what do you stand for?

Danny Devriendt is the Managing Director of IPG/Dynamic in Brussels, and the CEO of The Eye of Horus, a global think-tank focusing on innovative technology topics. With a proven track record in leadership mentoring, C-level whispering, strategic communications and a knack for spotting meaningful trends, Danny challenges the status quo and embodies change. Attuned to the subtlest signals from the digital landscape, Danny identifies significant trends in science, economics, culture, society, and technology and assesses their potential impact on brands, organizations, and individuals. His ability to bring creative ideas, valuable insights, and unconventional solutions to life makes him an invaluable partner and energizing advisor for top executives. Specializing in innovation (and the corporate communications, influence, strategic positioning, exponential change, and (e)reputation that come with it), Danny is the secret weapon that you hope your competitors never tap into. As a guest lecturer at a plethora of universities and institutions, he loves to share his expertise with future (and current) generations. Having studied Educational Sciences and Agogics, Danny's passion for people, Schrödinger's cat, quantum mechanics, and The Hitchhiker's Guide to the Galaxy fuels his unique, outside-of-the-box thinking. He never panics. Previously a journalist in Belgium and the UK, Danny joined IPG Mediabrands in 2012 after serving as a global EVP Digital and Social for the Porter Novelli network (Omnicom). His expertise in managing global, regional, or local teams; delivering measurable business growth; navigating fierce competition; and meeting challenging deadlines makes him a seasoned leader. (He has a microwave at home.) An energetic presenter, he has brought his enthusiasm, clicker and inspiring slides to over 300 global events, including SXSW, SMD, DMEXCO, Bluetooth World Congress, GSMA MWC, and Cebit.
He has worked with an impressive portfolio of clients including Bayer AG, 3M, Coca-Cola, KPMG, Tele Atlas, Parrot, The Belgian National Lottery, McDonald's, Colruyt, Randstad, Barco, Veolia, Alten, Dow, PwC, the European Commission, Belfius, and HP. He played a pivotal role in Bluetooth's global success. Ranked 3rd most influential ad executive on Twitter by Business Insider and listed among the top 10 ad execs to follow by CEO Magazine, Danny also enjoys writing poetry and short stories, earning several literary awards in Belgium and the Netherlands. Fluent in Dutch, French, and English, Danny is an eager and versatile communicator. His BBQ skills are legendary.

Discover more from Heliade
