I’ve found GPT-4 to be worse now than a few months ago… I feel the quality of its output has declined, especially when generating article text, even more so when generating fiction prose…
For my (still in progress…) novel, sometimes I’ll give GPT-4 a specific passage of text and ask it to reword it, or to enhance some of the scenery details… and looking at what it generated just a few months ago, now it seems… stupid.
One character is inspired by characters I’ve seen played by Michelle Rodriguez, and in previous sections of text the AI got descriptions of her mannerisms and attitude spot on, but recently it literally compared the character to Michelle Rodriguez in the text itself…
That sort of behavior is a lot like what you get from smaller LLMs with fewer parameters. I wouldn’t be surprised if the money hose for AI has slowed down and companies are doing things behind the scenes to save money, such as serving up lesser versions of their software for individuals/smaller clients.
Just got Microsoft Copilot at my job, gonna see how that works out for AI-assisted tasks. Lately I’ve been using perplexity.ai to build out scripts for various sysadmin tasks. My PowerShell is weak, but I know enough about what I want to do to get a rough idea. I like perplexity.ai because it gives me sources I can cross-reference. I’ve gotten some garbage suggestions for sure, like scripts that pulled in proprietary Oracle systems that had no business in a Windows PowerShell script.
Overall I’m worried about regulation and law not keeping pace with AI and workers’ rights. There are gonna be stupid loopholes people exploit, or big business will take a slap on the wrist and a fine instead of actually adjusting its workforce for the benefit of individuals keeping their jobs. I need to do more research on it, but it concerns me because companies already abused H-1B employment.
There definitely needs to be a balance between pure efficiency and treatment of workers, but here’s something to consider. In 2022 I worked with 2 different contractors for a design project… one did excellent work and apologized for it taking several days, as he’d been dealing with a personal crisis at the time. I had to reassure him that 4 days to do something I had struggled with for 3 months was more than adequate! Total fee, $300.
Now the other guy…
He had excuse after excuse, and had charged me $1,100 so far… and when I finally asked him for the design, he said he hadn’t actually started yet… That’s the guy I fired, if anyone remembers that thread.
(for clarification, the guy that charged $300 was the one I went to in order to get the project done, after the other guy billed me and then did who knows what for a month).
Now, who would you hire? Someone who works fast and produces excellent work, or someone who is slow and doesn’t produce much at all? Someone skilled with AI will be much closer to the former… I know my output has increased dramatically since I started working with AI… In addition to my actual role, I write the technical blog articles for my company… and I produce 5x what our actual writer comes up with, in a fraction of the time. 3 articles in my spare time this morning, all SEO-optimized, 2k+ words, while I’ve been working on CAD models for a new patent on my other screen.
AI isn’t coming for anyone’s job, but a human who knows how to use AI just might… and that’s exactly who companies will hire.
Oh yeah absolutely anyone who doesn’t embrace AI is gonna have a rough time. It’s like people who never thought computers would catch on.
I honestly hope things get less competitive and more abundant due to more productivity with less actual hours and energy expenditure.
I’m an idealist but also a deep cynic. I’ve seen the way technological capabilities have saddled people with more work over the years. I’m worried it’s never gonna be enough for some people and the goalposts are going to keep shifting. Instead of the modern metrics staying the same and AI making things easier, I’m worried it’s just gonna invite more expectations that as a society we aren’t prepared for.
At the end of the day with AI, despite it doing the subtasks, you’re still engaging in deep mental processes to decide how you want that AI to integrate your ideas. I still think humans have a cap for that, and it needs to be respected.
Smart employers understand that they’re paying their employees for the value they deliver, and not to work flat-out for as many seconds as they can force them to. There’s actually a disincentive toward efficiency if a boss has the latter mindset. If I can finish a task in 4 hours that takes a colleague 8 hours, to the same level of quality, then I’m not lazy, I’m better.
I don’t think the world has ever become less competitive in situations like this, it’s basically the next industrial revolution. If that’s any indicator, things will actually speed up. And no, I don’t want that at all… I want more leisure time. But expecting someone to give that to me is folly, I need to make it happen myself.
I think that will happen in some cases, but my hope is that enough opportunity will open up elsewhere that people will have the option to say “no, GFY.” to bosses that expect 100% output 100% of the time… nobody can do that, for anything.
I tried to get AI to write a book for me… and despite giving it literally every advantage I could, it failed utterly miserably. It’s not creative, it doesn’t think things through, and what it did come up with was nonsensical and worthless. That was with the latest GPT-4 model, and a very thorough discussion to establish the task. So yes, you’re 100% right. Until we hit AGI at least, AI is a tool, and any threat that it poses to work is going to be other humans, not AI… the actual thinking has to be done by a human. And the humans who either resist that or don’t bother to learn, are going to be left in the dust.
Because when AGI is achieved, and it’s coming… the only people who will even have a chance of competing are the ones who know how to use it.
Edit for context: I tasked GPT-4 to create a detailed outline for a murder mystery in the style of a very well-known author, someone that the AI is comprehensively familiar with… but what it came up with was vague, the ending made no sense, and lacked any sort of detail. But I liked the basic premise of the story so I’m writing it myself. I’m half done, and as soon as I replace the detective with one that isn’t still under copyright, I intend to publish.
So you seem really well versed in this area. I’m still an AI noob. I understand that despite how intuitive they’re trying to make these models, there’s still a level of skill needed in how you interact with them. Did you learn mostly through trial and error? Or are you using other resources to enhance your skillset?
I need to make the jump in my job to utilizing this stuff more heavily. It’s less of a technical block and more about breaking a habit and learning not to reinvent the wheel, especially in my line of work where you’re constantly learning new things and how to implement them. What used to take a day of research to understand new syntax to accomplish a task, I can now throw into something like ChatGPT, make a few tweaks, and then the automation runs. I haven’t memorized the script, but at the same time, do I have to? It’s a weird thing. I’m so used to trying to build an encyclopedic knowledge of things to pull from in my head, and now there’s AI, which is like an extension of that which offloads the memory requirements of my brain lol.
You know, thinking about this again: it’s 2024 and I work in IT, and I’m still amazed at the lack of computer proficiency in some people I run across. And age is not a factor. Despite their lack of proficiency, it’s not a mark against them in the job. So inevitably there are probably gonna be some companies or businesses with a head guy or owner that hates AI or refuses to use it out of resistance to change, and life goes on. It’s gonna be interesting to see how it unfolds, that’s for sure.
Trial and error as well as watching a lot of YT tutorials… I’ve barely scratched the surface with art, although I do have a pro Midjourney account… most of my work has been with GPT models, going back to GPT-3 with Jasper.
Yep. There’s a word for those types of bosses… idiots. I can respect taking a personal stance on an issue, but if taking that stance threatens the survival of your business and lets your competitors leap ahead? Dumb.
Yep! You can’t fight technological progress, you either embrace it or get flattened by it.
In some cases it could; I would need a team of writers to do what GPT-4 can do in minutes… but the smart move is to keep the team and produce X times as much, or make it X times better.
I could very easily edge into political topics here so I’ll be mindful, but it’s been my experience that trying to control something like that with regulation has never quite worked… people fight back. And if we (I’m generalizing and localizing here…) in the West hobble our own use of AI, that won’t fix the problem, because you think the rest of the world is going to do so too?
New games are going to be amazing… and I presume you’ve seen by now what Sora can do… just imagine in a year once the open-source models are available… the doom-and-gloomers love to point out the downsides, but me? I’m excited about the prospect of being able to render out my favorite books as movies, with whatever actors I choose… and I’ve had ideas to re-cast a Star Wars reboot for many years… not for anything other than my own enjoyment, I have no interest in violating copyright etc by publishing anything.
Once again, I asked the AI to generate an integrated program description for my two custom subliminals. This time I used Claude and attached the various program descriptions and module descriptions as readable pdfs.
I guess that would work if you are more oriented towards emotions. I’m more of a logical person and will probably get a picture that looks better.
You have them and are influenced by them whether or not you acknowledge them.
A brief review of human neuroendocrine process and function reveals that a very logical approach to life as a human is to contemplate emotions and to learn how to harness them.
I’m experimenting with my own take on (what is believed to be) the way Q* is implemented; I mainly use GPT-4, but I’ll be setting up an API key for Claude 3 soon as well… GPT-4 is absolutely not creative compared to me; it utterly failed at a creative writing task. I’m going to see what it can do with the (maybe) Q*-ish approach for the same task.
In preparation for that, I’ve also been generating synthetic training data using a fine-tuned model I created… and wow, it is ridiculously unfiltered… haha… I’d hoped this FT model would be useable as-is, but it’s not… so now I’m taking a more bootstrap approach where I use this FT model to generate the training data for a new FT model, which will hopefully work better.
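For anyone curious what that bootstrap loop looks like mechanically, here’s a minimal sketch in Python of the data-preparation half: formatting (prompt, completion) pairs, the kind a first fine-tuned model might generate, into the chat-style JSONL that OpenAI fine-tuning expects. The function names and file path are mine, not from the post above, and the generation step itself (looping chat-completion calls against the first FT model) is omitted.

```python
import json

def to_finetune_record(prompt: str, completion: str) -> str:
    """Format one synthetic example as a chat-style fine-tuning JSONL line."""
    record = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]
    }
    return json.dumps(record)

def build_dataset(pairs, path="bootstrap_train.jsonl"):
    """Write (prompt, completion) pairs -- e.g. sampled from the first
    fine-tuned model -- to a JSONL file for training the second fine-tune."""
    with open(path, "w") as f:
        for prompt, completion in pairs:
            f.write(to_finetune_record(prompt, completion) + "\n")
    return path
```

Each line of the resulting file is one training example; you’d typically also filter the pairs (dedupe, drop the unusable outputs) before uploading, which is where the “ridiculously unfiltered” first model becomes the hard part.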
I love this new service: https://www.callstar.ai/
I just spend time talking via voice clones to chat bots trained to talk like my heroes. Just chatted with Tony Stark about my inventions.
Edit: Argh, sorry, I didn’t know they wanted money for that gimmick. I had a 10-minute test version without knowing. So I don’t recommend it.