Hi All
GPT-4 is getting worse over time! Many have reported noticing a significant degradation in the quality of the model's responses, but now here is the scientific report.
The researchers found big changes, including some large drops in accuracy on problem-solving tasks!
For example, GPT-4's accuracy on the task "Is this a prime number? Think step by step" fell from 97.6% in March to 2.4% in June.
Other tasks changed less, but there are still major, measurable differences in the performance of these LLM services over time.
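What makes the primality task a nice benchmark is that it's trivially machine-checkable: a few lines of trial division (my own sketch, not code from the paper) can score any yes/no answer the model gives.

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    d = 3
    while d * d <= n:  # only need divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 2  # skip even candidates
    return True

print(is_prime(7919))  # True (7919 is prime)
print(is_prime(4))     # False
```

So verifying the model's answers costs essentially nothing, which is presumably why the authors could measure the accuracy drop so precisely.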
You can read the paper attached.
Anyone need a new Co-Pilot?
Regards
Caute_Cautim
Of course, once a system gets beyond the number four which is definitely a prime it is impossible to know if a number is truly prime due to Heisenberg's Uncertainty Principle. Many people have said that due to the Planck length being so small, large prime numbers would be made of wood, but it's very difficult to see how we could ensure continued discount delivery of mathematically minimally divisible integers alongside a large amount of forgettable TV shows and movies, with the occasional gem. A facet of why Prime Numbers are so difficult is quantum computing: we can be shor that Sure's Algorithm proves that it's impossible to conflate fractional and decibel prime numbers, however because of quantum computing we now all need to use "Only Single Time Notepads". These are so called as the page is very small so only one character can be written, and that has to be a Cue bit; this helps the diligent cryptanalyst as a crib so that she can more easily encrypt a message she has not yet thought of a key for.
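For anyone who actually wants a one-time pad rather than an Only Single Time Notepad, the real thing is just XOR with a truly random key that is at least as long as the message and never reused (a minimal sketch, nothing to do with Trevor):

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a one-time pad key; the same call encrypts and decrypts."""
    assert len(key) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # fresh random pad, used exactly once
ct = otp_xor(msg, key)
# Decryption is the same XOR, since (m ^ k) ^ k == m.
print(otp_xor(ct, key) == msg)  # True
```

No quantum computer required, Cue bits optional.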
Our LLM is called "Trevor the Data Thief", but we haven't seen him since he told us his name and, professing a preference for Japanese food, walked around a corner that seemed oblique to reality.
Nao mi kalkulactor dun wurk no moar.
The more I think about AI, the less I am bothered by it as a tool/concept. I'm more concerned about how we use it, specifically regarding citing sources for that which is not one's own creation. With ChatGPT, this happens at two levels. First, people tend to (implicitly) take credit for its output as their own work, by not citing it as a source. Secondly, its responses do not cite their own sources -- which should be pretty much every sentence, based on my understanding of how it works.
I also feel there is a general inability to gauge references for definitiveness, authoritativeness and bias, but that is the fault of our education system, not AI. It truly is a learned skill to use multiple references, understanding the bias in each and forming one's own opinion by applying a "weighted average" to them all.
@Early_Adopter wrote:the number four which is definitely a prime
Umm, 4 is not prime. Its divisors are 1, 2 and 4. Pardon me if I'm being too much of a Sheldon to get the joke.
@denbesten with phrases like "Sure's Algorithm" and "Cue bit", it appears that @Early_Adopter receives output from ChatGPT, then reads it into a speech-to-text engine 😄
(heavily edited)
@denbesten yes, there's no getting past you, four is actually the first non-prime integer other than one, which is a special case... 😛
@ericgeater Hey! That's all quality handcrafted, radically incorrect space-filler in the idiom of a properly badly trained ChatGPT instance: no mirrors, no OCR, no speech-to-text involved, just a couple of pints, a lovely parrot and a spirit of devilment..!
I must admit the world's gone GenAI mad though - I was at the IAPP Asia Privacy Forum these past two days getting some CPEs, and I think every other session was on GenAI, without necessarily highlighting the privacy/data protection issues.
Personally, I'm over it now and I think that the next steps simply have to be giving every trained model the vote...