AI Is Not Going Anywhere “BECAUSE I SAY SO!” sincerely, Tech
BY CHIZU UKACHI-NWATA
Oh yeah? Yikes.
The recent Meta-YouTube ruling could fundamentally change the way we engage with and view social media as we know it. On March 26, 2026, a jury in Los Angeles found Alphabet (Google’s parent company) and Meta negligent for their part in designing their social media platforms to be addictive, and thus harmful to younger people. The verdict left Alphabet and Meta owing a collective $6 million in damages to the 20-year-old Californian who sued the platforms.
It’s certainly easier now to recognize how much the atmosphere has been changing regarding our usage of online platforms and their usage of us. Age verification for children on platforms like Roblox, or even Australia’s ruling, might have felt far away or isolated, limited to certain demographics or age groups. A ruling like this, however, is something we won’t be able to scroll away from.
With shots this damaging aimed at companies that have for so long been author, judge, and executioner of dangerous policies and practices, it’s safe to say that Tech won’t give up easily; they are ready for war. And their threat assessment? Extinction level. If rulings like these keep coming and more digital safety measures are put in place, these “platforms” might become as stigmatized as the cigarette.
Our digital earth is changing, mirroring our very real one. They knew this was coming, and they have been preparing for it by betting BIG on AI. Capital B, as in billions.
The truth is the AI war is a narrative one, and Tech is winning. You would fight tooth and nail too if you had gazillions invested in making sure it works out. AI seems to be their new lifeboat, their future, and they want to drag us into it; they’re even willing to journey as far as Sacramento to break bread with the youth.
Earlier this year, I was fortunate enough to attend ConnectSafely’s Safer Internet Day 2026 on behalf of the Youth Leadership Institute’s HOPE (Healthy Online Platforms for Everyone) Coalition.
We were joined by representatives from Google, Meta, Snap, Roblox, and Discord (all headquartered in California), to name a few, for a day filled with research roundtable discussions and moderated panels.
I loved being around other like-minded youth who did not buy anything the companies were selling. Tech faced a lot of resistance during the moderated panel, and with every question asked by youth in the audience, it was clear that Tech was more invested in dodging the questions than actually answering them.
- Gosh, I should get into PR. The PR firms are the real winners in all this -
These companies are focused on optics and hypocrisy as opposed to honesty and integrity, like any other greedy corporation, so why do they seek special treatment? Why does Tech want to pretend to be our friend and not just a business, and why are they ashamed of their actions?
Their representatives answered no questions and deflected completely, shamelessly. It’s not lost on me that the representatives sent into these tech spaces were all white women with backgrounds in psychology and the like... It’s not a coincidence, it’s not feminist, it’s intentional. Intentional, because narrative is powerful.
What is that called? Greenwashing? Something washing -
It’s hard to imagine what their thinking was in attending; perhaps that this would be another opportunity to control the narrative? A performance has to be kept up and maintained, like a muscle, or like attention.
My takeaway from the day was as follows: The good news is, there is AI stigma. The bad news is, they want to get rid of it.
Think Big Tobacco releasing the cigarette for the very first time, today.
Usher in the propaganda: language PR.
Here are things I’ve heard about AI (you might have heard them too) that I am sick of. I cannot prove it, but I believe they are part of the very necessary propaganda campaign these companies need to thrive.
“AI is a tool” - Yeah, but what about words like substance, service, weapon? Is an algorithm specifically tailored to exploit each individual’s insecurities, endlessly and without limit, not a weapon?
“AI is here to stay” - Well of course you need that to happen so you can make your money.
“Whether we like it or not”
“AI is inevitable”
“AI is the future” - You said that already.
Now, if I’m remembering my Lundy Bancroft correctly, this is abusive language: it gives us no way out and defines things for us.
True, it’s not all bad, and it never really is. Cigarettes relieve stress, but ask any medical professional whether that is a long-term solution for your mental health.
LLMs can do real good as study aids. Aggregating sources, generating practice problems, and assisting or augmenting learning are where they really shine. But Tech doesn’t just stop there; that horizon is too small for them. They want to bet on addiction, on gambling, on insecurities, on fears, on desperation, because it works for their bottom line.
Usher in my frustration.
Why haven’t we learned from the game of social media catch-up we are currently playing? AI is not even starting off on the right foot. Literally: its ecological footprint is devastating.
How can we laud anything as innovative in this day and age if its very foundation relies on environmental destabilization? Data centers are already carrying on decades-long practices of exploiting marginalized communities.
Are we so ready to accommodate and inaugurate new diseases, like AI-data-center-induced asthma, the way we did and still do for crude oil, at the expense of Black and Brown people globally?
Where is our choice? With AI, consent is implied. Google, the de facto internet as we know it, forced participation in the rollout of its AI Overview feature. You didn’t get the choice to opt out before it happened, and now you have to go out of your way to specify “-ai” to avoid it. Where are the options?
Perhaps we are the ones being unfair to these companies by asking too much of them? No; that is our jurisdiction. It’s on us to fight for regulations, set boundaries, and hold these companies accountable, not them. It’s up to us to have honest conversations with ourselves about our reliance on these substances and how that reliance translates differently across intersectionalities, class, and marginalized communities. Not them.
That responsibility falls to us because they’re not qualified to carry it themselves. We tell them to make their platforms better, and they come up with measures that serve their own objectives and put the onus on others. They only get craftier at committing their crime.
They produce red herrings like parental controls: controlling the narrative, getting ahead with a flimsy bandage over the wound, one that hardly addresses the root issue.
And they cannot go against their very “nature” and attack themselves; nature wants to preserve itself. Tech does too.
We need to be at the helm of this narrative.
Yet a lot of us are already compromised. I use AI; I’ve used AI. I like to tell myself that I know what I’m doing, and I believe this to be true. I think I have strong boundaries with [insert AI chatbot here], and I am always sure to give it specific direction to ensure my desired response. That, in itself, is not a privilege everyone has.
Scrutiny is a privilege; using discretion and thinking critically are byproducts of an education. And the very privileges needed to skillfully navigate AI’s usage may be eroding in younger learners because of LLMs, according to an MIT study.
At Safer Internet Day, Common Sense Media presented research, “Betting on Boys,” about how boys are being groomed into gambling early on through video game microtransactions. The solution presented was stronger age verification. But why not stop these companies from offering microtransactions in the first place? What does exploring that question look like?
What could a change in narrative, a change in our questioning achieve?
What place does a company that is in court, making shameless arguments about the welfare of children, have in our futures?
Are social media, and now AI, essential services? Have we really reached the point of no return, where the modern world would crumble if regulations were put in place?
Will our questions at the seminar become obsolete, relegated to products of their time: a time when, despite the addiction, the ecocide, and the suicides, we still ask Tech what the future of youth online looks like and wait for their answer?