This guy makes some very good points.
So it would seem that most current AI can't learn anything beyond the original model it was trained on. This is a major hurdle, because the AI solutions being promoted for the future seem to depend on the AI being able to learn continuously. When technicians have tried to get an AI to learn after its initial training period, many report that the AI forgets what it originally learned.
Generally an AI needs to be able to answer queries while learning at the same time, which seems to be impossible with current techniques.
This is actually quite concerning, because we are led to believe AI is a super intelligence when in reality it is really just an advanced database that can perform actions, and is limited in what it can learn beyond its original programming.
I see deterioration in the quality of even basic information, errors that would easily be caught and filtered out by humans if anybody bothered to look at the content. I guess this is the beginning of the Misinformation Age.
This "map" is currently posted on one of the sites providing information about Mt. Everest (without my comments in red).
So, it would seem there is a term for the limits imposed by physics:
The von Neumann bottleneck is a performance limitation in computer architecture caused by the separation of the CPU and memory. Generally the link between them is a copper bus, and there is latency and wasted energy because this bus must charge and discharge in order to move data across it.
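To put rough numbers on the wasted energy (a back-of-envelope sketch; the picojoule figures are order-of-magnitude assumptions loosely based on commonly cited estimates, not measurements of any specific chip):

```python
# Illustrative, order-of-magnitude energy costs in picojoules (assumptions).
PJ_PER_32BIT_ADD = 0.9     # one arithmetic operation inside the CPU
PJ_PER_32BIT_DRAM = 640.0  # fetching one 32-bit word over the bus from DRAM

def fetch_to_add_ratio():
    """How many on-chip adds cost the same energy as one off-chip fetch?"""
    return PJ_PER_32BIT_DRAM / PJ_PER_32BIT_ADD

print(f"one DRAM fetch ~= {fetch_to_add_ratio():.0f} adds' worth of energy")
```

If those ballparks are even roughly right, shuttling data across the bus, not the arithmetic itself, dominates the energy bill, which is exactly the bottleneck being described.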
I asked Google AI: Can a light-based bus solve the von Neumann bottleneck?
Yes, a light-based (optical) bus has the potential to significantly mitigate, and in some specialized cases solve, the von Neumann bottleneck by providing massive increases in bandwidth and reductions in latency between the CPU and memory.
I asked Google AI: What is the progress on a light-based APU, and when will we see one being used with AI?
Progress on light-based (photonic) Artificial Processing Units (APUs) has moved from theoretical research to practical, high-performance prototypes as of late 2025. Researchers and companies are successfully demonstrating optical chips that can perform matrix-vector multiplications (the core of neural network inference) at the speed of light, offering up to 100x better energy efficiency than traditional electronic GPUs.
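For anyone unfamiliar, the matrix-vector multiplication those photonic prototypes accelerate is a very simple operation; a minimal Python version (illustrative only, real inference uses huge matrices and optimized hardware):

```python
def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x.
    This is the core operation of a neural-network layer during inference."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[1.0, 2.0],
     [3.0, 4.0]]
x = [1.0, 1.0]
print(matvec(W, x))  # [3.0, 7.0]
```

The appeal of optics is that each of those multiply-accumulate rows can in principle be done by light interference in parallel, instead of sequential electronic steps.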
I can definitely see why people are excited about this; the needs of AI clearly require a huge advance in hardware. Light-based computing has been researched since the 1980s, and it seems to have found a perfect application in AI.
Since all computing is based on this copper bus linking components on circuit boards, I think this Altman guy and Zuckerberg need to slow down and see how things look once light-based APUs are available for purchase. It doesn't make any sense to build a data center on APUs featuring the copper bus when the sheer resources needed to run it will be impossible to supply. Personally I think trying to do 25 years' worth of work in 5 years is a mistake.
Here it looks like the AI used to create this map was based on poor data and language models. It seems the AI can't understand the language of the names of some areas of the mountain.
What's interesting is: can this AI be informed of the correct names for the areas and then create the correct map in the future? Will it understand that it is wrong?
This is why humans will be needed: yes, the AI did most of the work, but now someone knowledgeable will have to review this and update it.
The next video goes into more detail about the server racks in data centers and their power consumption.
AI Crash Report: The Physics of the Collapse
This is actually ridiculous and a potential con. Microsoft/OpenAI are building a campus of data centers whose power requirement is 10 GW, so they would need roughly 10 power plants, which is impossible to achieve given the US has built only one nuclear plant in 30 years. Also, where exactly would they source so much uranium so quickly?
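The arithmetic behind the "10 power plants" claim works out like this (a back-of-envelope sketch; the ~1 GW-per-reactor figure is a typical rating for a large nuclear reactor, an assumption of mine rather than something stated in the thread):

```python
CAMPUS_DEMAND_GW = 10.0    # stated power requirement of the campus
TYPICAL_REACTOR_GW = 1.0   # rough output of one large nuclear reactor (assumption)

reactors_needed = CAMPUS_DEMAND_GW / TYPICAL_REACTOR_GW
print(f"roughly {reactors_needed:.0f} reactor-sized plants needed")  # roughly 10
```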
Most people's jobs are safe for now and the foreseeable future, as big tech companies aren't going to be able to get the processing power required.
OMG, so Nvidia have actually shipped a quantity of GPUs that would require around 6 GW of energy... has anyone sat them down and carefully explained to all of them that 6 GW of power doesn't exist??
So Block Inc, the owners of Square and Cash App, are cutting their human workforce by 40% (6,000 employees) and introducing more AI/automation.
So this guy June Paik has made the Warboy chip (an NPU), which seems to use almost two-thirds less energy and performs 2.5 times better than other chips. The key feature seems to be that the memory is on board the chip itself, so much less energy is needed: data doesn't have to be moved from memory to compute as in existing chips, because the data's home is right next to the compute units. It seems like a clever idea, and it has already had a few deployments with good results.
So, if Microsoft and their Stargate project, which needs 6 GW of energy, employ these chips, we are still looking at under 2 GW of energy, which still seems impossible.
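The savings claim works out roughly like this (a sketch taking the figures quoted above at face value, with "almost two-thirds less energy" treated as exactly two-thirds for simplicity):

```python
STARGATE_GW = 6.0          # quoted energy demand of the GPU fleet
ENERGY_SAVING = 2.0 / 3.0  # "almost two-thirds less energy" per chip

remaining_gw = STARGATE_GW * (1.0 - ENERGY_SAVING)
print(f"~{remaining_gw:.0f} GW still required")  # ~2 GW
```

Since the real saving is "almost" two-thirds, the actual figure would land a bit above 2 GW, which is why the post says "under 2 GW" only roughly holds.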
So the US Supreme Court has determined that anything an AI creates can't be copyrighted, which makes sense, since the AI could only have been trained on artwork created by humans.
Someone here mentioned AI can be used to score music in movies, which may now not be possible due to copyright limitations. Movie soundtracks can generate revenue, so it's important that they are copyrighted. I think many will just hire a human to do the music so they can 100% guarantee copyright; this will most likely be the case until AI copyright laws become clearer.
Perhaps it won't be long until people or organisations are able to declare themselves exempt from AI companies using data about them to train their models. I'm unsure how this would work exactly, but if something like this happens it will be a problem.
Millions of people are cancelling their ChatGPT subscriptions due to these copyright limitations. I'll definitely be keeping an eye on this, as copyright is a major factor in the training and delivery of an AI product.
Not everyone is affected; people creating for themselves and their friends can still enjoy it, and probably won't care if their creations are shared with others.
The same goes for hobbyist creators who just create for the sake of creating and/or to share their creations (similar to hobbyist writers, fanfic writers and so on); they won't care if people share their content either, as they never did it for money.
Not sure about derivative work, though. I've read pro-AI people speculate that a work combining AI-generated material with human effort altering it, or incorporating it into a human work, might still be copyrightable if distinct enough. (Not a lawyer, so I don't know if that's true.)
Also, if the AI-generated material is just part of a whole, this might not be an issue (such as an illustrated story where the illustrations are solely or mostly AI-generated). The AI content might be a lesser side element that can be freely distributed, while the main content is human-made (in the previous example, the actual story rather than the illustrations) and thus still copyright protected, so it can be monetized or have its distribution limited however the author wants.
What is clear, and is a good thing: actual low-effort, low-quality, automated slop content (a word thrown around a lot these days, with and without reason) is not copyrightable. I'm pretty fine with that and consider it poetic justice.
People are cancelling their ChatGPT subscriptions because of the ongoing conflict between the Trump admin and Anthropic. Anthropic declined to compromise on their stated principles in providing services to the Department of War, and so Trump announced the US government will never again do business with them. Sam Altman was quick to jump in and get OpenAI to take over all of Anthropic's contracts with the Department of War, which was a wildly unpopular move for an already unpopular company to make, especially when the attack on Iran happened literally days later, and so many users are now boycotting them over this. Sam Altman proceeded to go on Twitter to post about how people need to be more sympathetic to megacorps and the Department of War, which is just a bizarre thing for any CEO to do.
It won't affect OpenAI; they have yet to make any actual profit anyway, and they're propped up entirely by investments.
Yes, I agree with this; however, if millions of people have purchased or subscribed to some sort of AI music-creation software, then the claim would be against the creators of that software if copyrighted material was initially used to train the machine.
I would think that in the future anyone creating an AI product where they plan to sell subscriptions to millions of people may have to register with some sort of authority and declare all of the datasets used in the machine learning.
The copyright issues actually start at the machine-learning stage, as there are a few current lawsuits where AI developers have used lyrics from copyrighted songs to train their models.
Not going to happen, AI is just too useful.
Facebook outright pirated books for their training data, and got in no trouble over it.
For now, maybe; it may take a minute for people to catch on to what's happening. If I had spent years writing a book and someone came along and used it without permission for AI, I would fight very hard legally to get compensation, so I'm sure this isn't the last we've heard of this.
Yes, you are correct; it was already clear that many of these AI tech bros lack ethics. You've got Elon Musk not giving a damn about poisoning people in Memphis with polluting power-generation units, and then you have Altman ready to use all the electricity and every last drop of water in Texas to power a data center.
I do hope this is a determined attempt at boycotting OpenAI. They will definitely fail if they lose their user base, and if Trump is removed from power at some point soon.
That's true, though it's a different subject that my post was not contemplating; I was just writing from the perspective of AI users and how the copyright status of the output they generate might affect them.
The way AI training works, and how that might infringe on copyrights (or not), is a different debate. My personal opinion is that humans learn from other people's artworks, compositions, writing, and so on, and the AI just does this in an "infinitely" more efficient way. Thus, as long as the output is not outright plagiarism, I see no issue or conflict with copyright.
Great artists, musicians and so on often cite the quote "good artists copy, great artists steal". It is already part of the human pipeline to learn from and base creations on existing creations; the AI just does it at a vastly greater scale than a single human or small group.
But of course someone else might disagree.
As for the people subscribing: yes, it is unfortunate for them, and yes, the rules and limitations should be made clear before going forward.
Yes, this is an interesting subject. I wonder how good an AI music program would be if it wasn't given any copyrighted songs to learn from. Let's say it was given information on all musical instruments and an example of the notes/chords for each instrument, and that's it. It would be interesting to see what AI does with just that limited information.
Also, this might help those who feel their copyright was infringed: if an AI that was given no copyrighted content performs badly, that helps prove a marketable product cannot be created without their copyrighted material. Content creators might have a valid claim if this is the case.
Here is a link to the UK government website, which conducted a consultation (now closed) on the subject of AI and copyright. It looks like they might be trying to help AI creators get hold of copyrighted data. My concern here is that governments will just pander to the AI machine because it can generate billions per year, which will force claimants in the UK to go to the UK Supreme Court to get a ruling on the government's decision and have it changed or revised if it's found to be an unfair practice.
There have been a few times where the UK government has had to comply with the Supreme Court on subjects like immigration; some of the laws that were made seemed to lack humanity and were quickly overturned. Copyright owners will have to be careful here.
Yes, lol. I watch the AI content where someone finds a baby wolf and takes it home, and then the wolf mum comes and gently takes its baby back, or the ones of Trump crip walking and getting sturdy.
To make human-compatible music, it would also need to be explained, or trained somehow, to understand, recognize and thus generate actual music, not just various forms of noise or random sounds.
This is why training data is important. Generative AI works by learning what makes something what it is, like what makes a picture of a cat a picture of a cat and not something else, and then it uses this pattern recognition to generate other cat pictures.
This is why the copyright concern is questionable, at least in principle, if the model works as described. The model does not store actual copyrighted content in itself; what it has is the ability to recognize something, and using that it starts generating from scratch. As it repeats the process, it removes what doesn't fit the intended subject and reinforces what does, at the end of the cycle (hopefully) resulting in the requested subject.
As far as I understand the process.
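A toy illustration of the "statistics, not copies" point (a deliberately tiny sketch; real generative models are vastly more complex than this bigram counter, and the note sequences are made up for the example): the trained model below keeps only which-note-follows-which statistics from the training melodies, never the melodies themselves, and generates new sequences from those statistics.

```python
import random
from collections import defaultdict

def train(melodies):
    """Count which note tends to follow which: statistics, not copies."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length, rng):
    """Walk the learned transition table to produce a new sequence."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return out

training = [["C", "E", "G", "C"], ["C", "G", "E", "C"]]
model = train(training)
print(generate(model, "C", 8, random.Random(0)))
```

The generated melody is in the style of the training data but need not reproduce either training melody, which is the shape of the argument being made here, scaled down a trillionfold.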
So, based on this, an episode of Star Trek: Voyager comes to mind, where the Doctor (a hologram) performed an opera for a newly encountered species, very into science and especially mathematics, that they passed on their journey. The aliens were amazed, and the Doctor decided to stay with them as he became famous, reaching almost worshiped status.
However, a bit later they staged an opera performance in honor of the Doctor, also using a hologram, without the Doctor being involved. Judging by the aliens' reaction it was a huge success, a beautiful performance. But the Doctor and the human crew of Voyager were shocked, because the singing was a mess of unintelligible sounds. It was explained that the singing was based on some scientific algorithm, which to the mathematician's mind was a beautiful audio representation of the subject, but to a human it was just noise.
I am paraphrasing from memory. I think this is what your proposed scenario would likely result in: either random noise, or "noise" based on some algorithm.
At least without providing a principle of what music is or how to generate it, and the latter is much more difficult, as even some human-made music is just loud noise to others; even humans wouldn't agree on whether a given method of making music is proper.
I think this is why generative AI has advanced so successfully and so fast without an actual humanlike AI that understands the entire process. We don't need a thinking machine in the likeness of the human mind, one that understands what instruments are, what music theory is, what music is and so on, like a machine version of a human artist.
Instead, it just learns to recognize the patterns of music, what sounds like music, and based on that it can keep generating until something resembles music according to that pattern recognition.
Based on what I wrote above, which is how I understand the process (though best if you fact-check, as it is just my interpretation and I can't be 100% sure), they already have that proof. There is a reason the companies used existing music: it is the basis for the generative aspect of such AI to work. It needs a way to learn what music is so it can then generate output that sounds like music.
This process needs trillions of sources, because otherwise a human would need to listen to every output and decide how much it resembles music. That would take thousands of years, or millions of listeners. This is why training data is necessary.
This still doesn't necessarily mean copyright is infringed, however, as this is basically how humans learn anything as well, and how they create new content of any kind. They learn from existing material and experimentation, synthesize something new out of it, listen to it, refine it, and once it is in a state they find satisfactory, they call it a day and have the final output.
However, humans are also born with innate abilities, like an affinity for rhythm (at least those who are more likely to become musicians) and so on. Skilled or gifted people have better versions of these; they are even better at recognizing what makes something music.
If an AI was made not to be generative, it would need a programmed skill that does the same as a human's: an affinity for recognizing what music, and good music, is. While not impossible, this is the type of AI resembling the human mind, or in this case a human skill, that is not the generative type.
It is easier to train a generative AI than to create a human-mind or human-skill AI.
But as the AI does not operate by "cutting together" parts of existing compositions, and instead operates on the ability to recognize what music is, a form of pattern recognition, the copyright-based complaint is not necessarily convincing and might be outright invalid.
That's the usual human corruption, but as explained above it might not matter in the end; it can be a shortcut to the same result. Of course it is not the moral way to get there, but that's a different subject, one that does not change the copyright status of the output, nor whether using existing music as training data is copyright infringement.
But of course copyright can be about more than this (or the subject can be different from copyright), which is also what the debate is about: whether using a piece of music for such purposes requires the permission of the creator/owner or not.
I myself am not as sure about this aspect as about the copyright status, but it still seems to fall under the "humans do it too, just slower and at a lesser scale" argument. And to be honest, that is really why authors dislike it so much: if the AI took as long as a human to learn and generate music, it would not be major competition (they might still dislike that it works without sleep, in an automated way), but as AI can flood the market, which is already happening, it becomes a problem for them very quickly.
Who is right in this regard I'm not sure, as it is understandable that they dislike something that affects them. At the same time, advancement has always resulted in professions ceasing, or workers having to adapt to an altered work method or simply find new jobs.
The debate is not purely logical, about rights and what is reasonable; it is about subjective factors such as emotions, the status quo, humans vs. AI, and what is right or wrong. Thus the responses will be subjective as well.
The fact is, this is how it works for humans: they learn from each other, get inspired by each other and so on, but due to the massive scale at which AI does the same, it now has such an impact that it is a concern.
Whether this should be considered comes down to personal preferences and worldviews. In the end, the lawmakers have to decide. I guess big-tech money could influence that, which is a valid concern, as well as an immoral way to make a decision.
But even if that is why they decide as they do, it doesn't mean it is not as valid a choice on its own as the opposing one, considering the subjective factors. Objectively, what AI does is the same as what humans do. People's problem with it is not that this claim is factually incorrect; it is that they lose their jobs over it, as AI can generate faster and cheaper, at such a scale that it can fulfill all (or most) music-creation needs, leaving nothing (or little) for human composers. But purely on logic, their complaint doesn't seem valid to me.
Well, at least one can argue those are interesting concepts, and the absurdity and the technology behind them are what fascinate people. Slop is more like those channels that publish dozens of videos a day, every single day.
That on its own is already a problem, but if it were all high quality it would not be a bad thing. Instead, this flood is achieved with such a lack of oversight, due to the automation required to keep up, that the quality suffers greatly: the human effort is extremely low or outright nonexistent (full automation), and the videos often contain factual errors (especially problematic in educational, documentary and similar content), not just the wrong number of fingers on a hand (which is also bad quality).
These are the real slop, the problematic type of AI content, as they are not just a matter of personal taste but are objectively bad, worthless quality. And this is nothing new and not an AI-only thing; human slop has always existed too, for the same reasons: putting in less effort to produce more, more easily, in greater amounts over the same time period. So slop is not exclusive to AI.
The problem is that video, blog, music, etc. platforms are now flooded with such content. The admins, even with the use of AI (which often makes mistakes in judgement), can't keep up with it, and the audience also can't properly filter out such slop, which floods their recommendation pages and so on, essentially preventing them from reaching good-quality content.
This is basically what happened in the game-development industry over the years. As developing games became ever simpler, cheaper and easier, the flood of slop games became a problem, and it still is. It is ever more difficult to find good games in the flood of new titles being published in the indie/small-studio scene.
Just look at this and it is immediately obvious:
https://steamdb.info/stats/releases
(Over 20 thousand games released on Steam in 2025.)
