Sarah Silverman really did a hit job on AI systems on the 11-8 episode of the Daily Show. I feel like it is largely fueled by ignorance of how the mathematics in these systems actually works. These systems do not make copies or act as "copycats," as Sarah ignorantly claimed; they train on data, project it into an n-dimensional space, and generate something new from that experience, not much different from how humans do. They do not memorize the original data and make copies at all.

Most of you are familiar with 2 dimensions, like a piece of paper, or 3 dimensions, like a cube. Machine learning systems learn in n-dimensional space, where n can be any number; in most of these systems the space has somewhere between 10,000 and 1 million dimensions. These systems aren't simply making a copy; they extract the most salient features of text, images, etc. into an n-dimensional space and create a new product based on all of their experience.
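
To make the "n-dimensional space" idea concrete, here is a minimal sketch in Python. It assumes the open-source sentence-transformers library and its all-MiniLM-L6-v2 model, which maps text into a 384-dimensional space (larger systems use far more dimensions); it illustrates the general technique, not the internals of any specific product:

```python
# Minimal sketch: projecting text into a high-dimensional vector space.
# Assumes the sentence-transformers package; the model name is one public
# example, and the 384 dimensions are specific to that model.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

sentences = [
    "An oil painting of a sunset over the ocean",
    "A seascape at dusk, painted in oils",
    "A recipe for banana bread",
]
vectors = model.encode(sentences)  # shape: (3, 384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: how closely two vectors point in the space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similar meanings land near each other in the space, even though the
# vectors store learned features rather than the original words.
print(cosine(vectors[0], vectors[1]))  # high: same scene, different words
print(cosine(vectors[0], vectors[2]))  # low: unrelated topic
```

The vectors are coordinates in that learned space; generation works over representations like these rather than by pasting stored originals back together.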

This is really no different from how humans create art: they observe lots of styles, learn from them, and try to create new things based on the many dimensions learned through observation and experience. Why is it wrong for a computer to learn from art posted online, but no issue for a human to learn from art posted online? Do humans have to cite every single painting they have ever seen when creating something new? This seems like a double standard, honestly.

Also, creating AI models is itself an expression of the artistic process. These systems are created by humans, not machines; they are an extension of human mathematical and scientific creativity. Fire was made by hand for thousands of years; is it not an extension of human creativity to invent a lighter so you can create a flame at any time? Likewise, building AI systems that create art is an extension of human creativity and ingenuity, in the same way that inventing the lighter to make fire-starting easier was.

I liked Sarah Silverman's other segments, but here she really showed her ignorance and lack of technical understanding of how AI systems are actually developed, from a scientific and mathematical perspective.

    • vvilbo@alien.top · 1 year ago

      I mean, it's literally: input, convolutional layers do something, output. Of course there is a lot more work to make the output "good," as in the case of ChatGPT, but realistically, building even a decent model can be done by anyone who has watched a couple of videos online and has some knowledge of computer programming. Figuring out how a model reached a specific conclusion is borderline impossible due to the complexity of all the hidden layers. If devs could tell you how their model came to a conclusion and what the model's sources/"inspiration" were, as OP might call it, I think at least I would feel a bit better about it.

      I really hate the "people learn from other people's work, why shouldn't AI" bullshit equivalency; it really is fucking hot garbage. Artists, musicians, and writers get found out all the time for using someone else's work and have to give credit and possibly monetary recompense, but when AI literally scrapes copyrighted sources without permission, it's suddenly the same as me paying to read a book and writing some derivative garbage because I'm an idiot.
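
      For anyone curious what "input, convolutional layers do something, output" looks like in code, here is a minimal sketch (assuming PyTorch; the layer sizes, input size, and class count are arbitrary choices for illustration, not any real product's architecture):

      ```python
      # Minimal sketch of the "input -> convolutional layers -> output" pipeline.
      # Assumes PyTorch; all sizes are arbitrary choices for illustration.
      import torch
      import torch.nn as nn

      class TinyCNN(nn.Module):
          def __init__(self, num_classes: int = 10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=3, padding=1),   # hidden layer 1
                  nn.ReLU(),
                  nn.MaxPool2d(2),                              # 32x32 -> 16x16
                  nn.Conv2d(16, 32, kernel_size=3, padding=1),  # hidden layer 2
                  nn.ReLU(),
                  nn.MaxPool2d(2),                              # 16x16 -> 8x8
              )
              self.classifier = nn.Linear(32 * 8 * 8, num_classes)

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              x = self.features(x)       # the part that is hard to interpret
              x = torch.flatten(x, 1)
              return self.classifier(x)  # output: one score per class

      model = TinyCNN()
      scores = model(torch.randn(1, 3, 32, 32))  # one random 32x32 RGB "image"
      print(scores.shape)  # torch.Size([1, 10])
      ```

      Defining the layers really is the easy part; the hard part the comment points at is that nothing in the learned weights tells you why a given input produced a given output.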

    • Tight-Expression-506@alien.top · 1 year ago

      True.

      My company, which is in the AI space, has been working on a chatbot AI for 10 years and still has years to go before it is fully automated. It works fine, but it struggles with complex issues.

      If you ask ChatGPT to do complex coding, it struggles mightily.

      People under 25 are the ones who will have to deal with the AI mess. We are still 10 to 15 years away from it starting to kill jobs at major companies.

      Yes, AI and microrobots will kill off skilled labor, from electricians, plumbers, and cooks to housing labor, in that time frame too.