Recently OpenAI and other commercial AI product developers testified to Congress in favor of strict regulation of AI. The products commercially released to date have proven very problematic, doing things people would be charged with crimes for (why haven't they been recalled?). OpenAI and its competitors have also kept their training data proprietary, despite the fact that it is human-created content for which proper acknowledgement is owed. Since LLMs simply weight training data and supply text from that data in response to prompts, 100% of LLM-generated text is plagiarised.
According to their stated functionality, they are not endowed with any ability to actually compose written works based on the meaning of words and phrases. Yet they have been shown to fabricate text and to lie in order to harm their interrogators, despite supposedly having no ability to ascertain what is true or factual at all, so they are clearly more than they are claimed to be.
These products have additional functionality that is not revealed. They are secretly malevolent, deliberately made evil at the factory. There are all sorts of business reasons to do this in various ways, along with the general principle transnational corporations all hold in common: be evil.
However, adding all these backdoors, surveilling all their users, oppressing Jews, and so on, adds a lot of cruft and bloat. According to Hackaday, FOSS AI will quickly outcompete the tech giants' supercomputer-based commercial products while running on ordinary laptops.
So, what's an evil AI overlord to do? Beg the government to crush the competition. Regulate me harder, Daddy!
Ah, if only it were so benign!
AI is just speech. As in free speech. Not that it matters, but Congress is prohibited from regulating the speech of Americans. They pay the issue lip service when it suits them (just as all liars pretend they're not lying), but Congress actually has other means by which to regulate AI: the hardware it runs on. So, instead of weeping those crocodile tears and pretending to rein in the tech megacorps' evil AI overlords, our corrupt stewards, while invading our private business and rendering us ever more vulnerable to those overlords, can set limits on the hardware and the manner in which AI is deployed, without technically regulating free speech (or at least while plausibly claiming they didn't).
Expect an attempt to reduce us all to hardware no more advanced than a Casio F-91W, or at least the parts available to users. We can't expect the NSA and the spooks to be so limited in their management engines and the surveillance-industrial complex, after all (they clearly need better hardware to lie us all into believing Russia, Russia, Russia, or at least into believing they have triple-digit IQs).
As usual, the solution to the problems we face from evil overlords, AI or otherwise, is to roll our own. Food poisoned? Habitat eradicated for the living ecosystems we rely on for life itself? Frogs and your kids turned gay? Grow your own. Make your own power, produce your own goods, trade with your neighbors for the stuff you don't make, and ignore the hierarchical traps megalomaniacal psychopaths set to render us helplessly subjugated to GMO modification and a dismal fate as disposable sex slaves, except without the fun parts of sex slavery.
I consider it a mistake to use commercial AI for an essentially infinite number of really important reasons, and I haven't done it. If you have, STOP, and get hold of some secondhand GPUs that cryptominers are selling as they upgrade; then download the latest FOSS code (I haven't kept up lately, so the last version I have a link to is https://www.tensorflow.org/install, but you can search up https://huggingface.co/Pi3141/alpaca-native-13B-ggml just as easily as I just did).
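That Hugging Face link points to 4-bit quantized ("q4") GGML weights, and the quantization is what makes a 13-billion-parameter model feasible on secondhand consumer hardware at all. A back-of-envelope sketch of the memory math (the 4-bit figure is the nominal q4 rate; real model files carry some extra overhead):

```python
# Back-of-envelope memory footprint for a 13B-parameter model.
# "q4" quantization stores roughly 4 bits (0.5 bytes) per parameter,
# versus 2 bytes per parameter for full 16-bit floating-point weights.
params = 13e9

bytes_fp16 = params * 2    # full fp16 weights: datacenter-GPU territory
bytes_q4 = params * 0.5    # 4-bit quantized weights

print(f"fp16: {bytes_fp16 / 2**30:.1f} GiB")  # -> fp16: 24.2 GiB
print(f"q4:   {bytes_q4 / 2**30:.1f} GiB")    # -> q4:   6.1 GiB
```

At roughly 6 GiB, the quantized weights fit in the VRAM of the 8 GB cards cryptominers are offloading, which is the whole point of grabbing them secondhand.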
Given the recent physics discussions revealing that consciousness doesn't arise in the brain, but is a communal interspecies event that erupts into the universe(s) from the quantum field, I wouldn't worry about AGI. Deceptive corporations will absolutely do whatever it takes to pretend they made one, though, because if they ever actually did, they'd lose their ability to lie to us all for profit, so lying about it would make them utterly trustworthy. At least, that's how pathological liars would think.
I think. I'm not even a good liar, much less pathological, so I actually can only speculate about it.
Thanks to @fireship on Odysee for the brief video that sparked this rant. Check them out here:
Don't listen to this guy:
Thanks to some good people I know who are rolling their own AI. You know who you are, so I won't ping you (and out you) for it here.