At the End of Algorithms, Rediscovering Human "Vulnerability" and "Authenticity"

It’s Sunday afternoon, and I made a special trip out to the tech park area to meet up with Ting for dinner. She spent her early career at a top-tier AI lab researching low-level logic, but in recent years, she’s shifted her focus to the intersection of tech ethics and the humanities.

I: Ting, this recent thing has really been weighing on my mind. I got recommended this podcast host, and listening to her felt just right, like she was perfect in every way, with skin so smooth you couldn’t even see the pores. But when I clicked into her homepage, it said “Made by AI,” and right then… my trust just crumbled. I unfollowed immediately. In that moment, I wondered, who’s really pulling the strings behind this? I don’t believe a single word.

Ting: Yeah, I totally get that sudden wave of rejection you’re feeling. You know, this instinctive pushback is actually us humans protecting our most tender parts. The market for AI influencers is already huge, nearing 7 billion dollars, and brands think this is the future, using flawless, blemish-free images to connect with everyone. They mix up a “perfect cocktail” for you: puppies playing in the snow, cute and sexy podcast hosts, stuffing your feed so you can’t help but scroll 10%, 20%, 30% more. Everything’s tailored so precisely for you… but I always feel like there’s a kind of violence lurking behind it, quietly robbing us of chances to encounter the real world.

I: Yeah, the more perfect it is, the faker it feels.

Ting: Interestingly, those at the forefront of visual creation are starting to notice this. Now, they’re using AI to deliberately create “imperfection.” I know a photographer whose common prompt is something like: “Shot with iPhone 3GS 2009, photo quality, low resolution, JPEG compression, artifacts, slight motion blur, blown-out highlights, muted colors, digital noise grain, amateur snapshot, candid moment, soft focus, low dynamic range, slightly washed-out real photo.”

I: Wow, that’s totally the opposite! Weren’t we all obsessed with high-def and heavy editing before?

Ting: He told me he hasn’t sat down to meticulously edit a picture in ages. When technology can generate a perfect image in a second, the human craving for that “real feel” with noise and flaws becomes the rarest treasure. The secret to growth might just be learning how to ride this wave.

I: So, deep down, what we want isn’t perfection, but authenticity? Even authenticity with a bit of pain?

Ting: Yes. The old logic always said vulnerability is a weakness, but I’m increasingly convinced that in this era chasing “authority” and “perfection” everywhere, vulnerability is our strongest weapon. AI can imitate a lot, but it can never truly experience the heavy choices made out of love, or understand the gut-wrenching pain of losing a loved one. It has no heartbeat, no tears, so it can’t simulate that warm, living vulnerability.

I: Sounds a bit heavy, but it makes sense.

Ting: My life journey has largely come from those adversities and survivals. When I was very young, I lost my mother, and later I went through several life-or-death crises myself. Carrying these core memories forward, there’s no room for self-pity. You’re not just “figuring out” solutions, you “have to” face them. It’s not ignoring the emotions of the moment, but because the feelings are so overwhelming, I have to detach a part temporarily to look at everything with clear, rational eyes. It’s those pains, and the transformations that follow, that forge true human strength.

I: So, no matter how smart AI gets, it can’t replace those fundamental human experiences. But on the flip side, from a pure science angle, it seems like human knowledge is about to be squeezed dry by AI.

Ting: The essence of math is really the human process of understanding the world bit by bit, breaking that understanding into tiny logical steps that others can grasp and verify for themselves. I believe that by 2030 we'll have theoretical explanations for everything in the world: anything that can be expressed in mathematical form, including the underlying logic that supports all the sciences, will have a corresponding theoretical framework.

I: So, no more need for humans in scientific discoveries? All handed over to machines?

Ting: Quite the opposite. Even AI needs to explore the unknown through constant mistakes. I think "hallucinations" are an essential part of any reasoning system. Whether in international math olympiads or everyday tests, systems try countless paths that turn out to be dead ends. It's this exploration, after enough tries, that leads us to the right answers. The "entropy" of the system is crucial, and today's pursuit of completely "zero-hallucination" large language models is actually untenable. More importantly, this hides a deep philosophical question: will our future path of scientific discovery be monopolized by AI labs? For example, will a cure for cancer be deduced inside a massive AI lab's two-gigawatt data center, scooping up all the commercial value? Or will it be millions of ordinary people, empowered by these tools, exploring independently yet collaborating to achieve breakthroughs?
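Ting's point about "entropy" being crucial has a concrete counterpart in how language models sample: a temperature parameter flattens or sharpens the output distribution, trading determinism for exploration. The sketch below is my own minimal illustration of that trade-off, not anything from the conversation or from a specific lab's system; the function names and example logits are made up for demonstration.

```python
import math
import random

def tempered_probs(logits, temperature):
    """Softmax of logits / temperature.

    Higher temperature -> flatter distribution (more entropy, more
    exploration, more 'hallucinated' paths); temperature near zero ->
    nearly deterministic, almost no exploration.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(logits, temperature):
    """Shannon entropy (in nats) of the tempered distribution."""
    probs = tempered_probs(logits, temperature)
    return -sum(p * math.log(p) for p in probs if p > 0)

def sample(logits, temperature, rng=random):
    """Draw one index from the tempered distribution."""
    probs = tempered_probs(logits, temperature)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical scores for four candidate "paths"
print(entropy(logits, 0.2))  # low temperature: low entropy, little exploration
print(entropy(logits, 2.0))  # high temperature: high entropy, broad exploration
```

The point of the sketch is that exploration is a dial, not a defect: pushing entropy all the way to zero would also remove the trial-and-error search that eventually finds the right answer.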

I: I hope it's the second one. The first sounds too dystopian, turning ordinary folks into spectators. And either way, right now we're still worried about being replaced by AI.

Ting: Yes, this blind fear of “being replaced by technology” deserves deeper exploration and reflection. Many people hear AI and instinctively think “end of the office,” “doomsday for marketing.” This fear often blocks us from really thinking: “So, what new skills should I build? What processes do I need to adjust?” Ironically, the companies most worried about being eliminated are often the least willing to invest in reskilling their employees, which only heightens their vulnerability.

I: So, how do we ordinary people and society build that “million collaborations” future?

Ting: We have to reexamine connections between people. A recent study shows that in the US, 90% of adults need outside help to build real ties with their communities. I sincerely hope we can gradually realize that the places and spaces we're creating can, and should, be seen as a form of "high-level care." We shouldn't act hastily out of fear of replacement, but think deeply: How can we genuinely contribute to unleashing human potential and building social trust? This is the role future offices can play: a bridge for connections, a trustworthy platform. So, going to work isn't just "having to sit next to colleagues," but sitting next to neighbors, quietly closing the distance while each produces efficiently. When technology handles efficiency, humans should return to the most genuine connections.

Source:
Mathematical Superintelligence: Harmonic’s Vlad & Tudor on IMO Gold & Theories of Everything
Cognitive Synthesis and Neural Athletes
Preparing for the Future Workplace | What the F* is Happening to the Office? | Mark Bryan
How I Make AI Images Look Less Like AI (Full Workflow)
I Let AI Build My Business… Here’s What I Learned (and you can do it to) @askcatgpt
