AI singer-songwriter ‘Anna Indiana’ debuted her first single ‘Betrayed by this Town’ on X, formerly Twitter—and listeners were not too impressed.
This needs to be hammered into techbros’ heads until they shut the fuck up about the so-called “AI” revolution.
I’ve been doing a lot of using, testing, and evaluating LLMs and GPT-style models for generating code and text/prose. Some of it is just general use to see how it behaves, some has been explicit evaluation of creative writing, and a bunch of it is code generation to test out how we need to modify our CS curriculum in light of these new tools.
It’s an impressive piece of technology, but it’s not very creative. It’s meh. The results are meh. Which is to be expected since it’s a statistical model that’s using a large body of prior work to produce a reasonable approximation of what it’s seen before. It trends towards the mean, not the best.
This’d explain why inexperienced users of AI inevitably get mediocre results. It still takes creativity to get stolen mediocrity.
You have to know how to operate the oven to reheat store bought pie. Generative LLMs are machines like ovens, and turning the knobs is not creativity. Not operating the oven correctly gets you Sharon Weiss results.
I guess a protip is you have to tell it explicitly in the prompt who it’s supposed to steal from.
For instance, midjourney or SD will produce much better results if you put specific artstation channel names along with ‘artstation’ in the prompt.
I’m curious if you’ve gotten anything decent out of them. I’ve tried to use it for tech/code questions, and it’s been nothing but disappointment after disappointment. I’ve tried to use it to get help with new concepts, but it hallucinates like crazy and always gives me bad results; some of the time it’s so bad that it gives me answers I’ve already told it were wrong.
Yeah, I’ve just set up a hotkey that says something like “back up your answer with multiple reputable sources” and I just always paste it at the end of everything I ask. If it can’t find webpages to show me to back up its claims then I can’t trust it. Of course this isn’t the case with coding, for that I can actually run the code to verify it.
What version are you using?
GPT-4 is quite impressive, and the dedicated code LLMs like Codex and Copilot are as well. The latter must have had a significant update in the past few months, as it’s become wildly better almost overnight. If trying it out, you should really do so in an existing codebase it can use as a context to match style and conventions from. Using a blank context is when you get the least impressive outputs from tools like those.
I’ve used GPT-3/3.5, Bing, Bard, and Copilot, and I’m not super stoked. Copilot gave me PowerShell DSC resources that don’t actually exist, which was my most recent attempt at using an LLM.
I might see about figuring out if it can hook into my vs code instance so it’s a bit smarter at some point.
There’s an official plug-in to do this that takes like 15 minutes to set up.
I’m using it for an end-of-year AI project for school.
That’s where some of the significant advances over the past 12 months of research have been, specifically around using the fine tuning phase to bias towards excellence. The biggest advance there has been that capabilities in larger models seem to be transmissible to smaller models by feeding in output from the larger more complex models.
Also, the process supervision work to enhance CoT from May is pretty nuts.
So while you are correct that the pretrained models come out with a regression towards the mean, there are very promising recent advances in taking that foundation and moving it towards excellence.
I’m excited for how these tools will be used by human creators to accomplish things they could never do alone, and in that aspect it is a revolutionary technology. I hate that their marketing calls it “AI” though, the only intelligence involved is the human user that creates prompts and curates results.
I get the sentiment, but don’t really agree. Humans’ inputs are also from what already exists, and music is generally inspired from other music which is why “genres” even exist. AI’s not there yet, but the statement “real creativity comes solely from humans” Needs Citation. Humans are a bunch of chemical reactions and firing synapses, nothing out of the realm of the possible for a computer.
Yeah, I’d actually make a more limited statement. Real creativity requires the subjective experience and the ability to generate inputs solely from subjectivity i.e. experience the redness of the color red. AI could definitely do that, which is why LLMs are not AI imo
It’s not the techbros leading this, it’s the BBAs and MBAs that wouldn’t know art if Michelangelo came to life and slapped them in the face with the sistine chapel.
I would never call an actual technician a techbro! Techbros are Rick&Morty ledditor “fuck yeah science!” dorks.
I see it more as an inability to analyze, evaluate, and edit. A lot of “creativity” in the world of musical composition is putting together existing elements and seeing what happens. Any composer, from pop to the very avant-garde, is influenced by and sometimes even borrows from their predecessors (it’s why copyright law is so complex in music).
It’s the ability to make judgements (does this sound good or interesting? does this have value? would anyone want to listen to this?) and to adjust accordingly that will lead to something original and great. Humans are so good at this that we might be making edits before the notes even hit the page (brainstorming). This AI clearly wasn’t. And deciding on value seems wildly complex for modern-day computers. Humans can’t even agree on it (if you like rock but hate country, for example).
So in the end they are “creative,” but in a monkey-with-a-typewriter way: who is going to sort through the billions of songs like this to find the one masterpiece?
One of the overlooked aspects of generative AI is that effectively by definition generative models can also be classifiers.
So let’s say you were Spotify and you fed into an AI all the songs as well as the individual user engagement metadata for all those songs.
You’d end up with a model that would be pretty good at effectively predicting the success of a given song on Spotify.
So now you can pair a purely generative model with the classifier, so you spit out song after song but only move on to promoting it if the classifier thinks there’s a high likelihood of it being a hit.
Within five years systems like what I described above will be in place for a number of major creative platforms, and will be a major profit center for the services sitting on audience metadata for engagement with creative works.
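A rough sketch of that pipeline in Python. To be clear, the “generator” and “classifier” below are made-up toy stand-ins, not anyone’s actual models; the point is the shape of the generate-then-filter loop:

```python
import random

# Toy stand-ins: a "generator" that emits candidate songs and a "classifier"
# scoring predicted listener engagement. Both are invented for illustration.
def generate_song(rng: random.Random) -> dict:
    # A "song" here is just a tempo and an energy level.
    return {"tempo": rng.randint(60, 180), "energy": rng.random()}

def predicted_hit_score(song: dict) -> float:
    # Pretend the classifier learned that mid-tempo, high-energy songs engage best.
    tempo_fit = 1.0 - abs(song["tempo"] - 120) / 120
    return 0.5 * tempo_fit + 0.5 * song["energy"]

def promote_hits(n_candidates: int, threshold: float, seed: int = 0) -> list:
    rng = random.Random(seed)
    candidates = (generate_song(rng) for _ in range(n_candidates))
    # Generate freely, but only promote what the classifier predicts will hit.
    return [s for s in candidates if predicted_hit_score(s) >= threshold]

promoted = promote_hits(1000, threshold=0.8)
```

The generator can churn out candidates endlessly for nearly nothing; the classifier is what decides which few ever reach an audience.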
Right, the trick will be quantifying what is ‘likely to be a hit’, which if we’re honest, has already been done.
Also, neural networks and other evolutionary algorithms can inject random perturbations/mutations into the system, which operate a bit like uninformed creativity (something like banging on a piano and hearing something interesting that’s worth pursuing). So, while not ‘inspired’ or ‘soulful’ as we would generally think of it, these algorithms are capable of being creative in some sense. But the result would need to be recognized as ‘good’ by someone or something… and back to your point.
What you described in your second paragraph is basically how image generation AI works.
Starting from random noise and gradually moving towards the version a classifier identifies as best matching the prompt.
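A toy version of that loop, just to show the mechanics: plain hill-climbing with a made-up scoring function standing in for the classifier. Real diffusion models are far more sophisticated than this sketch:

```python
import random

# Toy illustration only: hill-climbing, not an actual diffusion model.
# The "classifier" is a made-up scoring function that prefers a smooth ramp.
def score(image: list) -> float:
    target = [i / (len(image) - 1) for i in range(len(image))]
    return -sum((p - t) ** 2 for p, t in zip(image, target))

def refine_from_noise(size: int = 8, steps: int = 2000, seed: int = 0) -> list:
    rng = random.Random(seed)
    image = [rng.random() for _ in range(size)]  # start from pure noise
    for _ in range(steps):
        i = rng.randrange(size)
        candidate = image.copy()
        candidate[i] += rng.uniform(-0.1, 0.1)   # small random perturbation
        if score(candidate) > score(image):       # keep only improvements
            image = candidate
    return image

result = refine_from_noise()
```

Random noise in, and a structured “image” out, purely because each random nudge is kept or discarded by the scorer.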
Plenty of humans make those judgements about their own creations. And plenty of them get a shock when they release their creations to the masses and don’t get the praise that they expected.
I believe that’s vital to the creative process, but yeah, I basically agree.
“Generative” is such a misleading term. It’s not generating anything, it is replicative.
deleted by creator
The anger comes from the fact that companies are using AI instead of hiring artists.
There is a distinction between a human being inspired by an existing piece of art and an ai creating something from other art. The human has to experience it through the lens of the human experience and create using the human body. AI takes multiple pieces of art and essentially makes a collage.
Eh, humans still take inspiration from others even in their original art. Most professionals draw from reference, or emulate styles, or follow some common method. Drawing from a singular source is ethically questionable, but imitating elements from many sources is just part of the process.
Arguably, no human creation is purely original, the originality comes from the creativity of the remix.
I’m not arguing for originality. I’m saying that you can have a human connection with a human-made piece of art that, by definition, cannot exist for AI art.
Removed by mod
read Read! READ!
For the thousandth fucking time, NO.
‘AI’ doesn’t feel joy, sadness, pity, entertained, or inspired when learning from others. Not even inspired to steal.
I think this is an important distinction. AI can be creative in that it can develop something new and unique, but it will have arrived at it by chance: through random inputs to an algorithm designed to mimic the evolutionary mutations that end up beneficial.
I agree that (at least for now) it would not be able to develop something out of inspiration or emotion. But that’s because we don’t understand enough about how emotion and inspiration are developed to create an algorithm that cultivates it.
For now.
And don’t forget, humans are also trained on the inputs of others.
The difference is everyone has a different perspective, remembers some parts and forgets others. Some journalists found a trick which revealed ChatGPT training data, and it was literally just verbatim stolen data that contained a real person’s information. You could hack into someone’s brain and they wouldn’t be able to directly recreate anything from memory alone; just watch any “from memory” YouTube video.
While it’s true there’s nothing stopping AI from having human-like experiences, the content laundering is the thing corporations actually want.
Meat goes in. Sausage comes out.
The problem for a lot of the companies behind these things is that they’ve run into trouble now that their investors want them to turn meat into a Black Forest gateau.
I’m sceptical that they can manage that feat. But what do I know.
deleted by creator
Removed by mod
Such a wonderful, thoughtful, creative retort. You must be an AI chat-bot.
Or I’m just sick of utter imbeciles saying the stupidest shit possible.
deleted by creator
Still, AI is able to “create” new things by a combination of existing concepts. It can generate a Roomba in the style of Van Gogh for example, which is probably not something that currently exists.
“Roomba in the style of Van Gogh” is a new combination of existing things, but it can never create something truly original. Derivative.
What is an example of something that is truly original and not derivative?
The style of an author’s prose is not derivative. Read your favorite book and then tell an AI to write a short story in the style of that author.
Unless you have truly blind taste, you are going to notice just how wooden the AI’s writing is.
An excellent example would be some sort of pulp novel where the author uses canned phrases. Dan Abnett has a very repetitive style that lends itself well to AI, yet AI cannot write a convincing Ciaphas Cain story. Convincing as in: if you showed it to me and I didn’t know what AI was, I wouldn’t think it was fanfiction.
Is this ability to create something original and non-derivative a basic human ability or is it something that very few are capable of only after many years of developing their ability?
Are you able, right now, to create something original and non-derivative as an example?
I don’t feel like it, but here is something I wrote with original prose, fitting the criteria of originality. As a favor in return for arguing with you, please give me feedback on my prose.
Not to talk down to you, but do you know what prose means? I actually used to not know what that word means, so it’s not an embarrassing thing not to know. That might be why I perceive you as “talking past me.” Prose is a writer’s style and choice of language. So purple prose is writing in an overly flowery and annoying way. Every writer, regardless of talent and skill, has original prose. I think the only practice required to achieve this is to write enough to have a consistent style. So since you completed public school, you probably also meet the criteria.
I have done the specific experiment I suggested using Dan Abnett’s works with ChatGPT, because I consider Dan to be my favorite author who makes repetitive pulpy fiction that I think AI ideally should be able to replicate, but it really can’t.
Thanks for sharing this. I wasn’t especially grabbed at the beginning, and honestly, since I had already checked the length, shortly in I didn’t think I would finish it. Maybe that’s just because I was sort of disoriented at the start and not really relating, so it was hard to find a foothold. About a quarter of the way in, though, it started to come together for me and I began really enjoying it. The final scene was quite vivid, and it quickly put me into the shoes of the hero and the pride they felt for their accomplishment. The anger toward everything just before succeeding did a good job of making them seem believable. I appreciate you taking the time to write that and share it.
I do not consider myself a writer, but I do find it therapeutic, and it is something that I have a habit of doing at least a little bit of every day; in fact, it is something that I keep track of my “streak” of. I think of prose as the writing version of the individual etchings that a carver makes when forming a block of wood into a sculpture. Any individual one on its own is not often very impressive, but it is the way they come together as a whole that creates something beautiful. I don’t know how in line that is with the accepted definition of the term, and really it isn’t a word that I have much cause for using, or much interaction with, in my life.
With the recent popularity of ChatGPT, a lot of people have just now started paying attention to modern chatbots. Many see them and assume that how they are now is just how they are, as if we are at some sort of wall, and that the things they are still bad at are intrinsic to the way a computer is able to “think”. These are the people who insist that a human is required to make beautiful or worthy artistic writing. They have made this judgement based on the assumption that what they see now is how it has to be.
There is another group of people, however, that sees this very differently: the people who have been paying attention to the space a bit longer. They are watching a rapidly accelerating trajectory. They saw how awful, yet intriguing, early GPT-2 was with things like AI Dungeon, and the enormous leap it took upon the release of GPT-3. They watched Replika morph from a tacky gimmick into something with enough of an emotional hold on people to cause stickied suicide-hotline Reddit posts when the owners decided to pump the brakes on its capabilities; something many perceived as “my best friend has been lobotomized and there is nothing I can do about it”. I know, crazy, right?
The newcomers that got washed in with the latest ChatGPT wave see this metaphoric car and say it’s no big deal, it’s only going 30 km/h, but what they fail to realize is that 0.25 seconds ago it was practically parked, and the gas pedal is still very much on the floor. The people who have been paying attention longer don’t see a single snapshot of a slow-moving car; they are watching a rapidly accelerating vehicle and wondering whether it is gonna hit 60 km/h by the end of the first second or 200, whether the acceleration will continue after the second is up, and how long it can keep this kind of rapid growth going. Who knows, maybe this new wave came in with no frame of reference, made their initial gut response, and will end up being right while the longer-term observers end up wrong, but that’s almost never how things seem to go. Only time will tell though.
deleted by creator
But all of human creation is derivative.
Are you saying the idea of a unicorn wasn’t new and original because it was drawing on the pre-existing features of a horse and narwhal?
Right, just as soon as all the people proclaiming that can point to the soul bit of my brain. There is absolutely no reason to say that AI cannot be creative; there’s nothing fundamentally magic about creativity that means only humans can do it.
You’re equating creativity to the soul. They’re not the same thing. But we can definitely look at the brain and see what parts light up when we perform creative tasks.
Right so why can’t the same sections be simulated? If you accept that the human brain is simply an organic implementation of a neural network, then you have to accept that a synthetic implementation can achieve the same thing.
The idea that the human brain is special is ludicrous and completely without evidence
I mean, I’m not arguing anything other than your false equivalence. I’m sure, at some point, we’ll be able to mimic how the human brain actually works, not just imitate the results. But we’re not even close right now. Not in the same ballpark. Not in the same tri-state area. We still don’t really understand how it does what it does completely. We know some of the processes, and understand that it’s chemicals interacting with the meat in some way, but it’s still mostly kinda just weird stuff our body does. We’re mostly just pointing at areas that light up with activity when we do a thing and saying “yep, that’s the general area that’s doing stuff.”
And that’s just understanding it, let alone figuring out how to imitate it with technology. And none of those parts of the brain work independently. They’re spread out, they overlap, and they exchange and change information constantly, all with chemicals. Getting a computer to mimic the outcome is still something we’re far from, but without the same processes, it’s not really gonna come out the same. We’ve got just… so long to go before we actually get close to simulating a human brain.
And just for fun, I do think this line of yours is funny:
Again, I wasn’t saying anything of any sort, and I’m still not really taking any stance beyond “that shits complicated and we’re not there yet.” But you’re supposing that a “synthetic implementation can achieve the same thing.” … without supporting evidence. This argument was clearly meant for someone else, but it’s not really fair to demand evidence from someone for their claim when you don’t support your own. Jumping to the conclusion that something is impossible is the same as assuming it’s definitely possible. You don’t know that. I don’t know that. No one really knows that until it’s done.
The belief that only humans can be creative is interestingly parallel to intelligent design creationism. The latter is fundamentally a religious faith, but it strongly appeals to the intuition that anything that happens needs a humanoid creator.
I don’t think the human brain is special either, but we are still two big steps ahead IMHO:
what have you seen that wasn’t there before?
I mostly have qualms with the quote. I have no illusions about the level of discussion around AI.
Yes, it is literally impossible for any AI to ever exist that can be creative. At no point in the future will it ever create anything creative, that is something only human beings can do. Anybody that doesn’t understand this is simply incapable of using logic and they have no right to contribute to the conversation at all. This has all already been decided by people who understand things really well and anyone who objects is obviously stupid.
Good job tearing down that strawman! 🙄
I was agreeing with you. I’m so sick of people thinking that “someday AI might be creative”. Like no, it’s literally impossible unless someday AI becomes human (impossible), because humans are the only thing capable of creativity. What have I said that you disagree with? You’re not one of them, are you? What’s with all this obsessive AI love?
LLMs aren’t intelligent. They’re jumped up chatbots lol
Yeah the current popular LLMs, absolutely they are, you couldn’t be more right.
We were talking about “AI” though. Are you implying that you think some day AI might be capable of creativity, and that creativity isn’t strictly a human trait?
I put “AI” in scare quotes specifically because I do not believe we are having an “AI revolution”. These are not AI.
I think AI can exist but that’s not what we have right now. What we have are jumped up algos that can somewhat fake it.
Even those future “real” AIs are going to be taking in human input and regurgitating it back to us. The only difference is that the algorithms processing the data will continue to get better and better. There is not some cutoff where we go from 100% unintelligent chatbot to 100% intelligent AI. It is a gradual spectrum.
I believe a real AI would be able to generate its own inputs without humans to give it input. It would have an actual subjective experience, able to actually imagine new things with zero external inputs. It could experience the redness of the color red.
Oh shit, I thought you had forgotten a “/s” at the end, but reading your other comments this is actually what you believe and how you talk. So… yeah, I’m not going to take someone who cites “people who understand things really well” as a source at face value.
Well then you didn’t read very many of my comments. I made this first comment because the post I responded to was so absurd so I just exaggerated the ridiculousness that they said. Of course AI is capable of creativity and intelligence. If you look at the long back and forth that this sparked you would see that this is my stance. After I made this over the top, very sarcastic comment, OP corrected themself to clarify that when they said “AI” they actually only meant the current state of LLMs. They have since admitted that it is indeed true that AI absolutely can be capable of creativity and intelligence.
No, I didn’t read the entirety of the comments you’ve made; I read your comment and the one you replied to. As a general rule, I (and I’d assume most people) read down a thread before replying, and don’t first look through everyone’s comment history.
Alright, no big deal. But yeah, your gut instinct was correct when you assumed there was a missing /s. I don’t really like the /s that much, especially in situations where it is so obvious.
If you had read down through this thread first then you would have seen the obviousness of the /s. I don’t think my comment history outside of this thread would have done much since I don’t generally talk about this stuff. I just meant if you had looked more than a couple comments in this particular back and forth discussion.
Except that it’s wrong… AI is capable of creativity. It created the artist name. It’s clearly not a very developed or robust sense of creativity, because it clearly just hashed up the name Hannah Montana, and the song is probably likewise just a hashed-up existing song, but I’m guessing it probably did a better job of creating an original work than Vanilla Ice…
I’m sorry, anyone who says these so-called “AI” are capable of creativity are being hoodwinked by marketing. This is an algorithmic probability engine, it doesn’t think and it doesn’t have an imagination. It just regurgitates probabilistic responses from its large data set.
Can you prove your brain is more than an algorithmic probability engine, albeit a powerful one?
Can you prove that anyone except you exists? I didn’t know we could just make something up and then demand to be disproven. You have to prove that a brain does work that way. Do you believe in God? If not, then how are you not a hypocrite?
You’re reading this and you’re not me, qed.
I actually just wanted OP to consider it. I know there cannot be definitive proof.
And here come the techbros to dehumanize themselves.
You and I feel. We don’t just generate outputs from inputs, we experience them. The color red isn’t just a datapoint recorded by photoreceptors, it’s a phenomenal experience that “I”, the self, experience as a being-in-the-world. Further, the color red that I experience is not the same as the color red you experience, even though it’s the same color at the same wavelength. Everything we think and feel relates to everything else, and while I can imagine how you might experience the color red and you can provide me with data points to make it easier for me to imagine it, that imagination will always be tainted by my own subjective experience.
To me it looks like you hold a lot of pride in being human and consider humanity special. I’m here to tell you we are no different from amoebas and giraffes. We just specialize in our complex meat computers.
If you took a psychedelic or a cognitive psychology class, you would understand through feeling that feeling is just the result of you being a meat calculator. Our feelings are the cumulative result of all the inputs and outputs, all at once. Slap on some lived-experience filters for subjectivity, and bam.
Feel is subjective. Not everyone’s a vicious crypto tech bro. Open your mind, it’s a good time ❤️
AIs aren’t meat calculators
I don’t think anyone here said that.
deleted by creator
What I’m saying is LLMs do not actually do that. They’re less creative than most animals, even if they’re more technically capable.
I’m not just a meat calculator, I’m also a feedback loop of meat endlessly calculating itself. That’s what subjectivity is. When LLMs do this they hallucinate, and ironically, while this is considered undesirable, I think it’s actually closer to creativity than the song this AI wrote.
… what do you think imagination is? A gift from God? The probabilities are probably more chaotic, and the data set more biased… but they’re the basic foundation of human imagination.
Machine based “creativity” is nascent, and far less unique… but that doesn’t mean it isn’t a form of creativity.
The human imagination also involves the phenomenal experience. You do not just record the data coming at you and regurgitate it, you experience it and then your experience further changes the data itself. We call this “subjectivity” and it’s where creativity comes from.
I am not saying that machine creativity is impossible. What I’m saying is these LLMs are not creative because they don’t even know what they’re doing and they don’t even know “they” are doing it. There’s no “there” there. No more creative than rolling dice.
and experience is ongoing learning, so if an LLM were training on things after the pretraining period, would that allow it to be creative by your definition?
but in that case, what’s the difference between doing that all at once and doing it over a period of time?
experience is just tweaking your neurons to make new/different connections
This. Humans are just meat calculators when you zoom out.
Experience is ongoing learning through the subjective self. When you experience the color red you do not just record it with your photoreceptors, and your experience of the color red is different from mine because we don’t just record wavelengths of light. We don’t just continue to learn from continual exposure to new data, we also continue to learn from generating our own data. In this way our subjective experience is qualitative, not simply quantitative. I don’t just see the specific light wavelengths, I experience the “redness” of red.
When an LLM is trained on that kind of data, it just starts to hallucinate. This is promising! I think the hallucination phenomenon is actually a precursor to creativity and gives us great insights into the nature of subjective experience. In a sense, my phenomenal experience of the color red is actually much like a hallucination, where I am also able to experience the color’s “warmth” and “boldness”. Subjectivity.
it’s only qualitative because we don’t understand it
when an LLM “experiences” new data via training, that’s subjective too: it works its way through the network in a manner that depends on what came before it. If different training data had come before it, the network would look different, and the data would change the network as a whole in a different way
When an LLM feeds on its own outputs, though, it quickly starts to hallucinate. I think this is actually closer to creativity, but it betrays the fundamental flaw behind the technology: it does not think about its own thoughts, and it requires a curator to help it create.
I’ll believe something is an AI when it can be its own curator and not drive itself insane.
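You can imitate that collapse with even a toy model. The sketch below is nothing like a real LLM (it’s just a bigram Markov chain retrained on its own output), but it shows a version of the same effect: each generation can only lose vocabulary, never regain it:

```python
import random
from collections import defaultdict

# Toy illustration of a model feeding on its own output (not a real LLM):
# a bigram model retrained each generation on what it generated last time.
def train(words):
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)  # record every observed next-word
    return model

def sample(model, start, length, rng):
    out = [start]
    for _ in range(length - 1):
        nexts = model.get(out[-1])
        if not nexts:
            break  # dead end: the chain has nowhere to go
        out.append(rng.choice(nexts))
    return out

rng = random.Random(42)
corpus = ("the cat sat on the mat while the dog chased the cat "
          "around the red mat near the old dog").split()
vocab_sizes = []
for _ in range(5):  # each generation trains only on the previous one's output
    model = train(corpus)
    corpus = sample(model, "the", 100, rng)
    vocab_sizes.append(len(set(corpus)))
# vocab_sizes is non-increasing: words dropped in one generation never return
```

Once a word fails to appear in a generation’s output, the next model has never seen it, so diversity can only ratchet downward without a curator injecting fresh input.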
The same could be said of a lot of creatives. You speak of greater creativity, that which evokes depth and gravity. There is still more shallow creativity. Learning creativity. That which you do before you learn to do better. Kind of what these are doing.
I’m not saying it’s good or bad, though the people who hold the reins definitely don’t have the best intentions for their use, but underestimating it is the first step to allowing them to run rampant.
“Never attribute to malice that which you can attribute to stupidity” is the slogan of those who do nothing but look down on others… who underestimate the horrible things the “stupid” can do. Don’t assume stupidity just because you don’t like something. It makes it that much easier for it to bite you on the ass in the future.
I don’t think I’d actually call that shallow thought “creativity”.
Think of a word association game. I don’t think the first word that pops up in my head is creative at all, it’s just a thoughtless reaction.
That’s what LLMs are doing. Without that reflection and depth it’s just a direct input->output
Would you say that a random name generator is a creative algorithm?
That’s a hella skimpy example, but yes.
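For reference, a “hella skimpy” random name generator really is just a few lines. A minimal sketch (the syllable lists are made up):

```python
import random

# A bare-bones random name generator: syllables glued together by chance.
# Whether this counts as a "creative algorithm" is exactly the question above.
def random_name(rng: random.Random) -> str:
    onsets = ["an", "bel", "dor", "el", "kar", "mir", "tha", "vin"]
    vowels = ["a", "e", "i", "o", "u"]
    endings = ["la", "na", "ric", "dor", "wen", "th"]
    return (rng.choice(onsets) + rng.choice(vowels) + rng.choice(endings)).capitalize()

rng = random.Random(7)
names = [random_name(rng) for _ in range(5)]
```

It will happily produce names nobody has ever used, yet every output is a mechanical recombination of the lists its author chose.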
Your opinion is wrong.
deleted by creator
I’m so sorry you feel that way.