Sarah Silverman really did a hit job on AI systems on the 11-8 episode of the Daily Show. I feel like it was largely fueled by ignorance of how the mathematics in these systems actually works. These systems do not make "copies" or act as "copycats" the way Sarah ignorantly claimed; they train on data, project it into an n-dimensional space, and generate something new from that experience, not much differently than humans do. They do not memorize the original data and make copies of it at all.

Most of you are familiar with 2 dimensions, like a piece of paper, or 3 dimensions, like a cube. Machine learning systems learn in n-dimensional space, where n can be any number; in most of these systems the space has somewhere between 10,000 and 1 million dimensions. These systems aren't making simple copies; they extract the most salient features of text, images, etc. into an n-dimensional space and create a new product based on all of their experience.
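
To make that concrete, here's a deliberately tiny sketch in Python (scikit-learn, only 2 dimensions instead of thousands) of what I mean by "projecting data into a feature space." It's a toy illustration of the idea, not how any production model is actually built:

```python
# Toy sketch: project a few sentences into a small latent feature space
# and compare them there. Real generative models learn far richer spaces
# (thousands of dimensions or more), but the principle is the same:
# learned features, not stored copies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "a painting of a sunset over the ocean",
    "an oil painting of the sea at dusk",
    "a recipe for chocolate chip cookies",
]

tfidf = TfidfVectorizer().fit_transform(docs)               # weighted word counts
latent = TruncatedSVD(n_components=2).fit_transform(tfidf)  # project into 2 dimensions

# The two painting sentences land close together in the latent space,
# even though they use different words for the same idea (ocean/sea, sunset/dusk).
print(cosine_similarity(latent))
```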

This is really no different from how humans create art: they observe lots of styles, learn from them, and try to create new things based on the many dimensions of knowledge built up through observation and experience. Why is it wrong for a computer to learn from art posted online, but no issue for a human to learn from art posted online? Do humans have to cite every single painting they ever saw when creating something new? Honestly, this seems like a double standard.

Creating AI models is also, in itself, an expression of the artistic process. These systems are created by humans, not machines; they are an extension of human mathematical and scientific creativity. Fire was made by hand for thousands of years; is a lighter, which lets you create a flame at any time, not an extension of human creativity? Likewise, building AI systems that create art is itself an extension of human creativity and ingenuity, in the same way that inventing the lighter to make fire-making easier was.

I liked Sarah Silverman in the rest of her segments, but here she really showed her ignorance and lack of technical understanding of how AI is actually developed, from a scientific and mathematical perspective.

  • PMMEBITCOINPLZ@alien.topB · 10 months ago

    I know the Daily Show is not a real news program, but Sarah Silverman is potentially the least objective person in the world to choose for a segment about AI. She's currently in the middle of a major lawsuit about AI and is on record saying she thinks training AI on her book was theft.

    • taco1520@alien.topB · 10 months ago

      It is theft. How is it any different from musicians having to pay royalties for sampling an older track in their songs? Even stripped-down samples that isolate a specific component and are used in a completely different style of music require payment.

      • PMMEBITCOINPLZ@alien.topB · 10 months ago

        Maybe. I'm just saying she's literally the least neutral person in the world on that question.

  • DanceSensitive@alien.topB · 10 months ago

    Equating machine learning with our biologically evolved CNS is either a delusion of grandeur or a very bad faith argument.

    • MatsThyWit@alien.topB · 10 months ago

      "Equating machine learning with our biologically evolved CNS is either a delusion of grandeur or a very bad faith argument."

      It’s people with no artistic ability who want to be able to call themselves artists because they typed a concept/idea into a computer and the computer drew it for them.

      • w0m@alien.topB · 10 months ago

        That's unfair, I think. I can't draw worth a damn. I love photography. Does that mean photography is cheating because it's point-and-click?

        It's simply a new (and vastly different) tool. You can argue the morality of training sets, but to say it's 'just typing to create' is incredibly reductive.

    • vvilbo@alien.topB · 10 months ago

      I mean, it's literally: input, convolutional layers do something, output. Of course there is a lot more work to make the output "good," as in the case of ChatGPT, but realistically even a decent model can be built by anyone who has watched a couple of videos online and has some knowledge of computer programming. Figuring out how a model reached a specific conclusion, though, is borderline impossible due to the complexity of all the hidden layers. If devs could tell you how their model came to a conclusion and what its sources/"inspiration" (as OP might call it) were, I think at least I would feel a bit better about it.

      I really hate the "people learn from other people's work, why shouldn't AI" bullshit equivalency; it really is fucking hot garbage. Artists, musicians, and writers get found out all the time for using someone else's work and have to give credit and possibly monetary recompense, but when AI literally scrapes copyrighted sources without permission, it's suddenly the same as me paying to read a book and writing some derivative garbage because I'm an idiot.
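
      For anyone doubting how little code the basic "input, conv layers, output" skeleton takes, here's a rough toy sketch (PyTorch); it's purely illustrative and not any real product's architecture:

      ```python
      # Tiny "input -> conv layers -> output" classifier, roughly the scale of
      # thing you can assemble after a few tutorials. The hard parts (data,
      # training, and explaining *why* it answers the way it does) are not here.
      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(3, 16, kernel_size=3, padding=1),   # input: 3-channel image
          nn.ReLU(),
          nn.Conv2d(16, 32, kernel_size=3, padding=1),  # hidden conv layers
          nn.ReLU(),
          nn.AdaptiveAvgPool2d(1),
          nn.Flatten(),
          nn.Linear(32, 10),                            # output: 10 class scores
      )

      x = torch.randn(1, 3, 64, 64)  # one random 64x64 RGB image
      print(model(x).shape)          # torch.Size([1, 10])
      ```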

    • Tight-Expression-506@alien.topB · 10 months ago

      True.

      My company, which is in the AI space, has been working on an AI chatbot for 10 years and still has years to go before it is fully automated. It works fine, but it struggles with complex issues.

      If you ask ChatGPT to do complex coding, it struggles mightily.

      People under 25 are the ones who will have to deal with the AI mess. We are still 10 to 15 years away from it starting to kill jobs at major companies.

      Yes, in that time frame AI and microrobots will also start killing off skilled labor: electricians, plumbers, cooks, and housing/construction labor.

  • yeswab@alien.topB · 10 months ago

    However, you absolutely cannot dispute that a for-profit entity profited from the act of its AI product learning from her book. The book may be publicly accessible, but it is not being given away free by its publisher, and whichever company's AI product learned from it derived profit from using her content.

    That’s not even technology, it’s just logic and ethics.

  • ATLCoyote@alien.topB · 10 months ago

    She's only got a few minutes to make a comedy segment about it, and there are indeed legit questions about whether it amounts to plagiarism.

    Yes, there is a counter-argument, as you've just illustrated with the human learning process, where we don't get permission or pay a royalty for everything that influences our creative designs or expressions. Ultimately, whether it's different when a machine does it is a matter for the courts to decide.

    At the very least, we've seen many instances where NIL has been used without permission by AI tools, and I assume there will be limitations imposed or royalties owed in those scenarios. So, does Sarah's book constitute "name, image, and likeness," particularly considering the autobiographical content?

    Not sure, but I don’t think her portrayal of the issue demonstrated ignorance. It just showed her point of view given that she’s involved in one of these lawsuits.

  • Q_van_der_Stuff@alien.topB · 10 months ago

    She probably didn't write most of that segment. It was put together in a writers' room, and I imagine she helped edit the final version. Any other guest host would have presented a nearly identical segment.

    • MiskatonicAcademia@alien.topB · 10 months ago

      Sarah is suing the maker of ChatGPT, so she has an ax to grind. As host, she would've at least read and approved the script before it was used.

          • ElectricJetDonkey@alien.topB · 10 months ago

            Well I don’t know enough to know if she’s misinformed or not, so I’m talking about the fact that she has a stake in the argument/discussion about AI.

            • MiskatonicAcademia@alien.topB · 10 months ago

              The entire topic OP wrote about, and what we're talking about, is how Sarah's segment on AI technology was misinformed and biased because of an ongoing conflict of interest. It is misinformation, which is dangerous. The post is only three paragraphs long; you can read it for context if you want.

      • Malachorn@alien.topB · 10 months ago

        That blackface episode was kinda fantastic though?

        It was also 2007.

        And let's be clear: the show made it obvious how awful her character was for wearing blackface.

        She's a shock comedian. She pushes boundaries. Of course she's going to cover almost anything distasteful; that's the schtick. When the episode aired… no one cared. It was considered acceptable, because it was actually ANTI-blackface and anti-racism in general.

        Today? We've realized memes are all that matter. No one watches these episodes and appreciates the context… so you just don't do it, because most people will only ever see a single offensive image on the internet.

        • Justinwc@alien.topB · 10 months ago

          Basically the same thing as Robert Downey Jr.'s character in Tropic Thunder. It's making fun of the person putting on blackface, not the blackface stereotype itself.

    • MatsThyWit@alien.topB · 10 months ago

      My guy, 99% of us don’t understand the technical aspects of AI.

      Most of the people who are so adamantly in favor of it, and against all criticism of it, seem to be the ones most confused about what it actually is. But they'll argue their point of view with such strong conviction that they've actually convinced themselves they aren't just bullshitting people.

      • prosthetic_foreheads@alien.topB · 10 months ago

        Ah yes, the people who hate AI know more about it than the people who love and use it. That makes a ton of sense.

        And you want to talk about blind conviction? Come on, look at some of the comments in this thread. People treat creativity as if it's sacred, and when you treat something as sacred, all logic goes right out the window.

        Seriously, just read back that comment and look at what the side you support is saying. You’re projecting like an IMAX.

  • Ok-Manager7196@alien.topB · 10 months ago

    100% this. I'm so sick of all the creatives (even those who should know better, like Scott Galloway and Kara Swisher) implying that AI "copies" any more than we humans do. Just because AI is faster at learning and has broader skills doesn't make the artistic process any different than how we learn and then create. I worry because SCOTUS has a long history of "not getting it" as well, and we'll end up with some new standard for AI creation that doesn't exist for humans.

    If you don't like art that can be "done in the style of Sarah Silverman," then define when it's "too close" to be used commercially, just as we do for human copying.

    • MiseryGyro@alien.topB · 10 months ago

      "Just because AI is faster at learning and has broader skills doesn't make the artistic process any different than how we learn and then create."

      No AI will ever cry in an alleyway because the audience the night before loved them while tonight's hated them. No AI will have its heart broken or bury its parents. No computer will have to stare down a terminal disease and find the will to keep creating.

      You mindless chuds consume art with no respect for the pain and struggle it takes to create something from the intangible.

    • taco1520@alien.topB · 10 months ago

      AI doesn't create, it outputs. You can test this easily: ask an artist to draw a picture of their happiest or saddest memory, then ask an AI to do the same.