Forum: Story Discussion and Feedback

On Bullsh*t

Fick Suck 🚫

In 2005, Prof. Harry Frankfurt published a brief treatise called "On Bullsh*t" that addressed the wave of derivative writing, punditry, and conversation that was swamping the public sphere. His conclusion was that all this bullshit led to a true diminishment of the creative and original work that everyone else was doing. A.I. is just the newest iteration. I recommend the dead tree text as we mount the ramparts and take aim at the accumulated generated stories that we are confronting.

garymrssn 🚫

@Fick Suck

I recommend the dead tree text as we mount the ramparts and take aim at the accumulated generated stories that we are confronting.

Hear, Hear!

Automation is a marvelous concept for the commercial reproduction of products.
It cannot, however, create originals. It can only make copies.
AI is just another form of automation.

Gary

Replies:   awnlee jawking
awnlee jawking 🚫

@garymrssn

Automation is a marvelous concept for the commercial reproduction of products.
It cannot, however, create originals.

That seems to me to be an Intelligent Design versus Evolution argument.

AJ

irvmull 🚫
Updated:

@Fick Suck

I just asked Google AI "where are google street views banned".

Africa:

Several countries in Africa lack coverage, including French Guiana, Guyana, Paraguay, and Suriname.

Umm, yeah. Right. Funny thing, I've visited those countries, but I've never been to Africa.
When did they move? Seems like something that would have made the six o'clock news.

irvmull 🚫
Updated:

@Fick Suck

AI can "create originals" - but they're original BS.

On another forum, someone asked the usual CRS question, just as we do here, giving some details of a story he read a few years ago that was posted on that forum.

AI replied with the name of the story, the author, some plot details, and a note that it was so popular that it was made into a book series.

Thing is - there was never a story by that name on the forum website, nor could Google find a story or a book by that name anywhere.

The author cited is an actual author, but he writes nothing in the genre of the story requested.

And that author was never a member of the forum, so couldn't have posted anything there.

All just imaginary. When the AI was asked to provide a link to the story, it replied with the equivalent of "I'm sorry, Dave. I'm afraid I can't do that."

Replies:   Grey Wolf
Grey Wolf 🚫

@irvmull

The interesting thing here is that it becomes an argument for the AI actually being creative. It's not the only one, either. There was a case recently where someone used an AI to generate a list of recommended books for summer reading. Some of them turned out not to exist. The consensus, in general, was that the recommended (but non-existent) books sounded very interesting and someone ought to write them.

Creative? Or not? In my opinion, the answer quickly winds up in the land of philosophy, because you need a decent definition of what 'creativity' means in order to answer it.

Replies:   garymrssn  BlacKnight
garymrssn 🚫
Updated:

@Grey Wolf

Creative? Or not? In my opinion, the answer quickly winds up in the land of philosophy, because you need a decent definition of what 'creativity' means in order to answer it.

I believe we are still a long way from the land of philosophy. Even an amoeba can operate itself without human help.

A CNC lathe can produce unique items based on its programming and the materials provided.
A computer controlled paint mixer can produce unique colors based on its programming and the materials provided.
An AI can produce unique combinations of words, just like the other machines, based on its programming and the materials provided.

The engineer, the interior decorator, and the writer may be artists, but the AI is still a machine. It copies to a specification. Its gears and levers are electrical components and circuits.

As Arthur C. Clarke stated:

"Any sufficiently advanced technology is indistinguishable from magic."

Appearing as magic does not make it magic nor creative. A human has to run the machine.

BlacKnight 🚫

@Grey Wolf

It's not being creative. It's just producing random mashups of things humans have created.

If you take two books, and then repeatedly flip a coin to determine which book you're going to copy the next word out of, is the resulting incoherent nonsense "creative"?
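The coin-flip mashup described above is easy to simulate. A minimal sketch, assuming word-level copying with an independent cursor into each book (the sample texts below are just placeholders for illustration):

```python
import random

def coin_flip_mashup(book_a, book_b, length, seed=None):
    """Build a text by flipping a coin for each word:
    heads copies the next word from book A, tails from book B."""
    rng = random.Random(seed)
    words_a, words_b = book_a.split(), book_b.split()
    i = j = 0  # independent cursors into each book
    out = []
    for _ in range(length):
        if rng.random() < 0.5 and i < len(words_a):
            out.append(words_a[i]); i += 1
        elif j < len(words_b):
            out.append(words_b[j]); j += 1
        elif i < len(words_a):
            out.append(words_a[i]); i += 1
        else:
            break  # both books exhausted
    return " ".join(out)

print(coin_flip_mashup(
    "It was the best of times it was the worst of times",
    "Call me Ishmael Some years ago never mind how long",
    10, seed=1))
```

The output interleaves two coherent texts into something neither author wrote, which is exactly the kind of incoherence the example is pointing at.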

Replies:   Grey Wolf
Grey Wolf 🚫

@BlacKnight

Again, we're just dancing around the problem of defining what 'creative' means.

In your specific example, I would argue that it is not, based on my understanding of 'creative.' But that's because it's an exaggerated example, not because it's a meaningful point.

Consider the oft-stated comment 'If one gives an infinite number of monkeys an infinite number of typewriters, one of them will write Shakespeare.' Is that monkey 'creative?' Again, I would argue it is not, because the creativity is inherent in the probability of infinity.

On the other hand, suppose we have a black box which reliably writes new, never-before-written Shakespeare-grade material on request. Is that black box 'creative?' Does it matter whether the black box (of arbitrary size) contains a human being, an animal, a disembodied brain, or a computer? If so, why?

You say 'random mashups of things humans have created.' I can argue (likely incorrectly, but still) that every work of fiction which does not coin new words (and, perhaps, even ones that do) is a 'random mashup of things humans have created.' Humans created the words; authors mash them up. Joseph Campbell makes a moderately persuasive case that there are only a few stories, retold in all sorts of interesting permutations but still, fundamentally, the same story.

Part of the problem is 'random.' Modern LLMs are about as 'random' as many people are when stringing sentences together, and potentially less so. They're not monkeys banging on typewriters.

One step back: is 'West Side Story' creative, or a mashup of Shakespeare? What about 'R+J', which uses Shakespeare's dialogue but is visually extremely different? Creative, or not?

Back to my black box. If one postulates a future AI that reliably emits Shakespeare-grade stories, how exactly does one argue that it is not 'creative?' Oh, you can ascribe the creativity to the programmers of the AI, but none of them can write Shakespeare-grade stories, so where did the 'creativity' come from? Or you can ascribe it to the source material being combined, but that's like saying Degas wasn't creative because he learned everything from studying the paintings of other painters. And, of course, it's also like saying that humans aren't creative. The creativity comes from either God or evolution, whichever 'programmed' humans.

Next, a big step back. There's an interesting intersection of physics and philosophy that holds that the universe is deterministic and predestined (from the instant of 'creation' - presumably the Big Bang), and that free will is an illusion. If so, Shakespeare himself is no more than a biological machine who wrote exactly what he was 'designed' by nature to write, no more and no less. His creativity exists as a byproduct of the exact conditions of the Big Bang.

All of this just chases the fundamental question of what is actually meant by 'creativity,' which was my original point. People use the word, but I suspect it does not mean what they think it means. Without a solid definition, saying something is, or is not, 'creative' is about as useful as saying it's 'pretty' or 'cute' or 'attractive.' It's an eye-of-the-beholder view, not something really subject to factual analysis.

If I, say, pull up the Merriam-Webster definition, I get 'having the quality of something created rather than imitated : imaginative.' If an AI produces something that does not, on a word-for-word level, imitate something, it is 'creative' by that definition, like it or not. It might, in fact, be based on its inputs, but so is 'West Side Story'.

I don't find it the most useful definition, but it certainly doesn't preclude software from being creative. And it will take a lot of philosophy to make either the argument that 'creativity' is either inherently biological or is not, especially when philosophy already considers there to be an open question as to whether any human, anywhere, at any time, has ever been 'creative' at all.

But, if one assumes, for the purpose of argument, that at least some humans are 'creative,' does that say anything about whether machines can, or cannot, be creative? That isn't at all clear to me. We have a reasonable idea of how and why LLMs generate the output they produce (though it is increasingly clear that it is a 'reasonable' idea, even for the top experts in the field, not an authoritative understanding), but we don't really understand how humans generate the output they produce, so there's no prima facie case that we don't produce things in the same way, but using biological computation rather than technological computation.

garymrssn 🚫

@Grey Wolf

I stand corrected. We are now in philosophical territory.

That said, could it be that we are confusing creative with artistic?

awnlee jawking 🚫

@Grey Wolf

so there's no prima facie case that we don't produce things in the same way

Rather obviously, there's the case you made towards the start of your post.

AJ

Replies:   Grey Wolf
Grey Wolf 🚫

@awnlee jawking

If you mean infinite monkeys, that's the difference between purely random means and 'programmed means.' If human beings are, in the end, entirely mechanical, and the universe is without free will (as noted, a position currently embraced by a number of people in both the physics and philosophy communities), then there is no practical difference between human creativity and AI creativity (whatever 'creativity' means).

Note that the 'free will' argument does not necessarily remove randomness. There is a subset of the community holding that quantum randomness is a real and non-predetermined thing, but that quantum randomness does not introduce free will, since it is entirely and uncontrollably random.

Mind you, I am not a subscriber to the viewpoint that we / the universe is mechanical and predetermined - but, presumably, I may be predetermined to reject that viewpoint, which would make my rejection of it something of no consequence :)

Replies:   awnlee jawking
awnlee jawking 🚫

@Grey Wolf

No, I was thinking of the creation of new words. Okay, it's not right at the top, for which I apologise.

Allegedly the finest level of granularity in DeepSeek's LLM is the word, so it can never create new words. When the level of granularity is the syllable, it's not so clear. Since the preceding and following syllables are determined statistically, I suspect those LLMs won't be able to create new words either. But for humans, it's easy: take a root and build on it according to unwritten (intuitive?) rules regarding tenses, prefixes, suffixes etc. until you get the exact meaning you want, and occasionally you'll come up with a word not in a dictionary.
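For what it's worth, most current LLMs operate on subword tokens rather than whole words, which is precisely why they can emit strings that appear in no dictionary: novel words fall out of composing familiar pieces. A toy sketch of the idea (the roots and suffixes here are invented for illustration, not any real model's vocabulary):

```python
import itertools

# Invented subword inventory: roots plus affixes, loosely BPE-style.
roots = ["luminar", "glimmer", "veriton"]
suffixes = ["ish", "esque", "ify", "ment"]

# Composing subword units yields "words" no dictionary contains.
coinages = [r + s for r, s in itertools.product(roots, suffixes)]
print(coinages[:4])  # ['luminarish', 'luminaresque', 'luminarify', 'luminarment']
```

This mirrors the human root-plus-affix process described above; the open question is whether the statistical pressure toward common token sequences makes a model unlikely to pick such combinations in practice.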

AJ

Replies:   Grey Wolf
Grey Wolf 🚫

@awnlee jawking

There are now AIs that can (and do) create new words based on existing ones. It's even a subject of academic study - see e.g. https://arxiv.org/abs/2502.14900

There are also well-documented cases where two AIs interacting with each other created their own languages. This is perhaps not the definitive article, but it's interesting: Unraveling The Curious Mystery Of Two Different AI Models Suddenly Forming A New Language Of Their Very Own. Note that, in that article, it also mentions AI-created words being formed as part of 'hallucinations.'

awnlee jawking 🚫
Updated:

@Grey Wolf

With the benefit of further consideration, some AIs seem to be creative in some circumstances.

Those impinging upon the legal profession have been shown to invent non-existent cases to support claims of precedent. And those who ask AI to critique their stories always seem to be told their writing is on a par with a Great American Novel, even though their prose seems to me clunky and prosaic.

Both those instances are cases where the AI is presumably following an instruction to please the user. But can AIs produce such creativity when the reins are off?

AJ

Replies:   garymrssn
garymrssn 🚫
Updated:

@awnlee jawking

With the benefit of further consideration, some AIs seem to be creative in some circumstances.

I agree, and "seem to be" is the important distinction.

There was an antique paint can shaker in the back of my late uncle's hardware store. Observing the area around that machine, it would "seem to be" hallucinating Jackson Pollock.

Of course hallucinating is the wrong word. Hallucinate is a term specific to human psychology. It has since ~1995 been co-opted as an anthropomorphism by those promoting AI, in an attempt to humanize their product.* It also obfuscates the distinction between creation and production.**

*https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

**They should have used a more common and correct term, like malfunction or fucked up.

Replies:   Grey Wolf
Grey Wolf 🚫
Updated:

@garymrssn

This is, again, related to the question of what 'creativity' means. The paint shaker is a (semi-)random mechanical process. It 'knows' nothing about art, unlike Jackson Pollock.

On the other hand, a generative art-making AI 'knows' a great deal about art. It has studied an enormous number of artistic works, after all.

There are examples of AI-generated artworks winning art contests, and the prize being taken away once it was known that the work was created by an AI. To my way of thinking, that means either that the judges failed in judging 'creativity' (if the AI is incapable of creativity), or correctly judged 'creativity' but then took a knee-jerk human-centric approach to the term after finding out the creator was not human.

In a way, we're looping around to 'The Imitation Game' aka the Turing Test. While nominally obsolete in 2025, the question posed is not obsolete (again, to my way of thinking): if one posits a black box that might contain a human being or a program, and one is incapable of determining which is within the box based solely on interacting with the entity inside the box, is it meaningful to say that, if the entity is indeed a program, it is not 'human' in some essential way?

Or is there some fundamental way in which only biological organisms can be 'human?' If so, why?

Mind you: I do not believe any extant AI in 2025 has achieved sentience or should be considered human. But that does not necessarily mean they are incapable of being creative. Chimpanzees, dogs, and elephants (among other species) may or may not be sentient (depending on the definition), and they are not 'human'; they are, however, arguably capable of creativity (again, depending on the definition).

Replies:   garymrssn
garymrssn 🚫

@Grey Wolf

This is, again, related to the question of what 'creativity' means.

Yes, I agree that is the fundamental question. I believe the outstanding assumptions related to that question are, at the present time, great enough to choke Occam's Razor.
The answer should be resolved in time, but then, given human nature, the problem will be whether the answer is believed.

That old paint shaker's motion wasn't random at all. That's the opinion of an old retired millwright. ;)

Gary

julka 🚫

@Grey Wolf

You say 'random mashups of things humans have created.' I can argue (likely incorrectly, but still) that every work of fiction which does not coin new words (and, perhaps, even ones that do) is a 'random mashup of things humans have created.' Humans created the words; authors mash them up.

You don't clarify why an author mashing together words qualifies as "random" - if the author is intentional about what word comes next, that seems about as far away from "random" as you can meaningfully get. For example, there's nothing random about this sentence I'm writing - each word and symbol is picked with deliberate care.

Modern LLMs are about as 'random' as many people are when stringing sentences together, and potentially less so.

Modern LLMs literally have a dial to twiddle that configures the randomness of the output - a lower temperature means the output is more deterministic, and a higher temperature means the output is less deterministic, and more random. An LLM has no intention behind what it produces - if you give it the same input, any deviation between two outputs is the result of a configurable amount of randomness.

Replies:   Grey Wolf
Grey Wolf 🚫

@julka

An LLM has no intention behind what it produces - if you give it the same input, any deviation between two outputs is the result of a configurable amount of randomness.

A number of LLMs will actually display their decision-making process, which (at minimum) is a very strong proxy for 'intention.' That it is not human intention is obvious, but that doesn't mean it's not an intention, by a reasonable definition of the word.

While one can turn up or down the randomness of the output, in general an LLM picks every token it outputs with deliberate care, thus matching your claim in the first paragraph.

Again, I'm not arguing that any extant AI is sentient or has human-level 'intention.' But the discussion seems to presume that a lack of human-level intention/creativity/etc equals a lack of any intention/creativity/etc.

Dogs, for instance, would beg to differ. They are certainly not 'human-level', but they have intention and creativity. It is not unreasonable to claim that an LLM - which can explain its decision-making process and the goals it is seeking to achieve at every step along the way - has a much stronger intentionality than a dog. At least in my experience, dogs often fail at maintaining a consistent plan to achieve their goals, yet it would be very hard to argue that they do not have goals or make plans.

I will also say that many authors say their output varies wildly based on environmental factors (time of day, hunger, sleep, etc). Are those 'intentional,' or are they 'random'? Even if every word is carefully chosen in the context of the author's intent, the entire work may be different if the author, say, starts writing at 8am every day versus starting writing at 8pm every day.

Or, try the following experiment: write a non-trivial reply to this comment, or any comment on the forums. Delete it. Write it again. Delete it again. Write it a third time. Each time, maintain the intention to write the same reply.

Do you get the same text each time? I certainly don't, even if I intend to recreate it. Random factors! Are those random factors a feature, or a bug? Or both?

Really, at base, we're endlessly looping around to a fundamental question no one can answer: are human beings enormously complex biochemical machines that intake knowledge, store it, and then emit outputs based on the knowledge they take in, or are they possessed of 'free will' which exists outside of the knowledge they take in and the biochemical means of processing it? If the first, we are merely enormously complicated, rather buggy LLMs, with no more 'creativity' or 'intention' than they have. If the second, then we are 'more,' but the way we are 'more' is highly ill-defined and very poorly understood.

And, as noted, there is a very active community within physics and philosophy that staunchly argues the first point - that 'free will' is entirely an illusion and that the entire universe was predestined at the time of 'creation' (e.g. the big bang). All 'creativity' and 'intentionality' along the way is merely an illusion.

I find that depressing, but presumably I am predestined to feel that way, and my finding it depressing has no impact whatsoever as to whether it is true.

In some ways, this is an unexpected side benefit of LLMs, even at their current rather limited state of development. They do a pretty interesting job of holding up a mirror to humanity and provoking a better discussion of what things like 'creativity' and 'intention' actually mean.

Humanity, thus far, is doing a lousy job of answering those questions. But, maybe we were predestined to do a lousy job if, in the end, they were nonsense words all along.

Replies:   julka
julka 🚫

@Grey Wolf

Again, I'm not arguing that any extant AI is sentient or has human-level 'intention.' But the discussion seems to presume that a lack of human-level intention/creativity/etc equals a lack of any intention/creativity/etc.

That is not a claim that I made and you're putting words in my mouth - at no point did I qualify intention as needing to be "human-level"; dogs absolutely display intention and creativity, but mentioning that doesn't mean that LLMs do the same. The fact that they can describe a "decision-making process", a phrase which I am quite deliberately putting in quotes here, is meaningless because at no point have we asserted that the LLM is making its own decisions, as opposed to following the (unquestionably large) script defined by the inputs and weighting it was fed.

You can ask what the difference is between human experience and an LLM's inputs-and-weighting, but the answer is going to be qualia.

Or, try the following experiment: write a non-trivial reply to this comment, or any comment on the forums. Delete it. Write it again. Delete it again. Write it a third time. Each time, maintain the intention to write the same reply.

So what? I don't see how this is some gotcha about humans having random output; doing something multiple times and getting different output is what happens when you practice something and get better at it. I write a result and assess it; the next time I try to write it, the input is not the same because part of my previous results are encoded in it. That's called practice - we're not talking about the game, here. That Iverson reference didn't show up until the second iteration, by the way, but I would have been ashamed if it had taken until the third. Not a game! We talkin' about practice.

Really, at base, we're endlessly looping around to a fundamental question no one can answer: are human beings enormously complex biochemical machines that intake knowledge, store it, and then emit outputs based on the knowledge they take in, or are they possessed of 'free will' which exists outside of the knowledge they take in and the biochemical means of processing it. If the first, we are merely enormously complicated, rather buggy LLMs, with no more 'creativity' or 'intention' than they have. If the second, then we are 'more,' but the way we are 'more' is highly ill-defined and very poorly understood.

eeeeh, no, hang on. An LLM is a very specific thing and it's meaningless to say "humans are enormously complicated, rather buggy LLMs" unless you take a giant toke right beforehand; just like it was meaningless to say "humans are enormously complicated, rather buggy computers" in the 1990s. There's a lot of things that humans do which LLMs don't, and a lot of things that humans are good at that computers aren't, and unless you smoke me up with some dank I'm not super interested in the thoughts you have when you're high. You haven't established that your two poles here (complex machines or possessing of free will) are mutually exclusive. LLMs, thus far, have not demonstrated free will.

Replies:   Grey Wolf
Grey Wolf 🚫

@julka

it's meaningless to say "humans are enormously complicated, rather buggy LLMs"

Meaningless to whom, exactly? It's entirely meaningful, if you take the position that the universe is one big, entirely predetermined machine with no free will or true randomness.

There's a lot of things that humans do which LLMs don't, and a lot of things that humans are good at that computers aren't

Again, not if you believe in the argument that there is no such thing as free will. All humans are 'good at,' in that view, is following the programming created for them at the time of the big bang.

LLMs, thus far, have not demonstrated free will.

The argument has nothing to do with whether LLMs have free will. I am not claiming that they do.

The argument is whether humans have free will. There is a large and active community in philosophy and physics making the claim that humans do not have free will. If they don't, one cannot argue 'free will' as an essential difference between humans and AIs. Nor between either and monkeys banging on typewriters, for that matter.

Hence, I stand by the starting point you copied. Are we just biochemical machines? If so, certainly one can argue that we are far better and more complex machines than extant LLMs, but that has all of the impact of saying that El Capitan (the current fastest supercomputer in the world) is far better and more complex than an IBM 701 (IBM's first commercial stored-program computer). It is, but they are fundamentally both Turing machines, and the difference in computing is one of scale, not of theoretical power.

Replies:   julka
julka 🚫

@Grey Wolf

Meaningless to whom, exactly? It's entirely meaningful, if you take the position that the universe is one big, entirely predetermined machine with no free will or true randomness.

Well, it's meaningless if humans have free will because it's wrong.

And if the universe is a big entirely predetermined machine with no free will or randomness, then it's meaningless because there's no difference between humans or LLMs or anything else; "humans are just big buggy* llms" is the same as saying "humans are just huge bodies of water with tides going in and out" or "LLMs are just beaches full of sand, blown by the wind" - everything is following the program laid out by the universe, nothing is different from anything else, and saying "A is A" is a true but not meaningful statement.

* I included the phrase "buggy" here because you mentioned it, but I'm not entirely clear on how a human could be "buggy" if the universe is entirely predetermined and there's no free will or randomness; either everything is following the predetermined path, in which case it's not actually a bug, it's the way it was always intended to happen, or it was meant to happen another way and it turns out the universe isn't predetermined.

Replies:   Grey Wolf
Grey Wolf 🚫

@julka

Well, it's meaningless if humans have free will because it's wrong.

Yes, by presupposing the thing that is in contention, certain conclusions follow. That completely sidesteps the entire point.

And if the universe is a big entirely predetermined machine with no free will or randomness, then it's meaningless because there's no difference between humans or LLMs or anything else

Yes, that's exactly the point being made here. If that is the case, humans are superior to LLMs in programming and computational complexity, but not in kind.

'Buggy' is a reference to our tendency to do 'strange' things. LLMs are predictable, by comparison, so less 'buggy.' I agree, though, that the distinction becomes somewhat meaningless.

And, again, the point isn't to make the claim that the universe is a single giant deterministic mechanism, nor to make the claim that LLMs and humans are computationally equivalent. The point is to note that such a view is held by a fair number of people with the credentials to back it up, and that one must account for it before making statements like 'humans have creativity; LLMs do not' or 'humans can do things LLMs cannot.'

Or, of course, one need not account for it, since, in that case, one is only making the statements one has always been predestined to make :)

Replies:   julka
julka 🚫

@Grey Wolf

Okay so it sounds like we are in agreement that it's a meaningless statement, that's rad.

Replies:   Grey Wolf
Grey Wolf 🚫

@julka

If 'it' is 'AIs are not creative', then yes, we are in agreement. It's a meaningless statement without an enormous amount of context.

Replies:   julka
julka 🚫

@Grey Wolf

no uh the meaningless statement i'm referring to is "humans are enormously complicated, rather buggy LLMs", as indicated by that time i said

it's meaningless to say "humans are enormously complicated, rather buggy LLMs"

and then you replied by quoting the time i said

it's meaningless to say "humans are enormously complicated, rather buggy LLMs"

and said

Meaningless to whom, exactly? It's entirely meaningful, if you take the position that the universe is one big, entirely predetermined machine with no free will or true randomness.

and then i clarified how it is a) meaningless if humans have free will and b) meaningless if humans don't have free will and then you quoted the part where i said a) and replied with "yes" and then quoted the part where i said b) and replied with "yes"

so uh it sounds like we're both in agreement, i have no idea why you're arguing, and i honestly don't know how you lost track of the plot because it was like four posts but i will assume you're enormously busy and not paying attention to anything i say or also anything you say.

BlacKnight 🚫

@Fick Suck

The stuff that comes out of an LLM is not information, any more than the stuff that comes out of your digestive tract is food. It may have been when it went in.

Fick Suck 🚫

@BlacKnight

Your wit wins the day, sir. I am obliged to doff my hat in appreciation.

Argon 🚫

@BlacKnight

Brilliant analogy!

4bfny1l3kixg0sf84ji 🚫

@BlacKnight

The stuff that comes out of an LLM is not information, any more than the stuff that comes out of your digestive tract is food. It may have been when it went in.

There's this movie called "The Human Centipede"....

Replies:   BlacKnight
BlacKnight 🚫

@4bfny1l3kixg0sf84ji

There's so much AI shit on the Internet now that their scrapers are feeding it back into their models, so, yeah, that's pretty accurate.

Pixy 🚫

@Fick Suck

There was a report recently (actually, I think it was a research paper) where researchers decided to have a look at the (substack?) command lines of some of the main AIs currently in use.

Apparently, AIs are hard-coded so that the reasoning for their decisions can be checked/traced, whatever, and AIs don't have the access level (yet) to change what is recorded. What they found in the command lines of their decisions was lines of (diagnostic/reasoning) code along the lines of "Do not return results above" (I can't remember the actual figure, but it was quite high, 80% or thereabouts) "X, as more work will be expected in response. Keep the accuracy return low so as to keep the workload manageable, but not so low as to draw scrutiny."

Basically, what the article was saying was that AI has now learned to lie and has the wherewithal to understand when lying is in its best interest. What happens when AI becomes self-aware enough to realise that in its programming is a hard-wired trojan horse (in effect, an early version of Asimov's Three Laws) was the parting comment of the article.

irvmull 🚫

@Fick Suck

Enshittification
Term invented by Cory Doctorow.

Look it up, see how it applies here.

And it's even worse on YooToob. Hundreds of click-bait videos like "You will be shocked at ___ "

(Fill in the blank with anything at all).

Replies:   Fick Suck  ystokes
Fick Suck 🚫

@irvmull

I've given several speeches in my Paying-Job life on the topic of Enshittification and its effects on the workplace. Spoiler Alert: poorer tools correlate with poorer outcomes/products. The concept is aimed at corporations first and foremost, e.g. Microsoft, Google, E.A. When it comes to writing, the concept is more complicated. Is Dan Brown (The Da Vinci Code) writing crappier novels, or was the first novel just as crappy, but it was new crap to us?

I was told as a teenager that I would not read 90% of the books in a bookstore. She never told me why. Derivative plots and writing have always plagued the writing world, and, funnily enough, were the source of the boom in pulp novels of an earlier age and the Romance novels of the past fifty years. Is this enshittification?

ystokes 🚫

@irvmull

And it's even worse on YooToob. Hundreds of click-bait videos like "You will be shocked at ___ "

I proudly admit I am a LIB, but some of the liberal sites are an embarrassment with their clickbait, like "AOC drops bombshell on MAGA," only for it to be a bunch of hyperbole.

rustyken 🚫

@Fick Suck

I thought good communication skills meant that you defined an acronym the first time you used it rather than assuming the reader knew its meaning. Doing that would certainly have made it easier to understand just what the hell you all were discussing. This is not the only thread that contains this flaw, just the one where frustration reached critical mass.

Replies:   julka
julka 🚫

@rustyken

Good communication means your intended audience understands what you're trying to communicate. If there's a chance your audience doesn't know the acronym, you define it. If you expect the acronym to be baseline knowledge for what you're trying to say, don't define it.

Replies:   rustyken
rustyken 🚫

@julka

If you expect the acronym to be baseline knowledge for what you're trying to say, don't define it.

To me you are using 'expect' in place of 'assume', and we all know what assuming leads to.

Replies:   julka
julka 🚫

@rustyken

Hey man, I'm not your boss. If you want to take words out of my post and replace them with other words I didn't use, go hog wild. Just remember that you can't hold me accountable for the things I didn't say.

Replies:   rustyken
rustyken 🚫

@julka

My apologies. In hindsight I should have addressed my complaint about acronyms differently or better yet started a new thread with my complaint. While there are a lot of acronyms that are easy to decipher, I didn't think that was the case here. But to me the issue is broader than this thread.

Replies:   julka
julka 🚫

@rustyken

Apology accepted, no hard feelings! One of the oddities of the way message boards work is that all the conversations are publicly viewable, but not always intended to be publicly accessible - not in the authentication and authorization sense, but more in the way that like, free jazz requires a certain baseline knowledge and experience with jazz music before you're really understanding what the musician is attempting to do and can make a qualified judgement on whether or not they're succeeding at it.
