Sarah Silverman really did a hit job on AI systems on the 11-8 episode of The Daily Show. I feel like it was largely fueled by ignorance of how the mathematics in these systems actually works. These systems do not make "copies" or act as "copycats," as Sarah ignorantly claimed; they train on data, project it into an n-dimensional space, and generate something new from that experience, not much different from what humans do. They do not memorize the original data and spit out copies.
Most of you are familiar with two dimensions, like a piece of paper, or three dimensions, like a cube. Machine learning systems learn in n-dimensional space, where n can be any number; in most of these systems the space has somewhere on the order of 10,000 to 1 million dimensions. These systems aren't simply making a copy; they extract the most salient features of text, images, etc. into an n-dimensional space and create a new product based on all of their experience.
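To make the "salient features in an n-dimensional space" idea concrete, here is a minimal sketch, assuming a toy three-sentence corpus and scikit-learn. It compresses the sentences into a small learned feature space instead of storing the text itself; real models learn spaces with thousands or millions of dimensions, but the principle is the same.

```python
# Minimal sketch, not any production model: turn a few sentences into
# feature vectors, then project them into a tiny learned feature space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "a painting of a sunset over the ocean",
    "an oil painting of mountains at dawn",
    "a photograph of a city street at night",
]

# Step 1: convert raw text into sparse vectors of word statistics.
vectorizer = TfidfVectorizer()
word_vectors = vectorizer.fit_transform(corpus)

# Step 2: project those vectors into a small dense space, keeping only
# the directions that best summarize the data ("salient features"),
# not the original sentences themselves.
svd = TruncatedSVD(n_components=2, random_state=0)
embeddings = svd.fit_transform(word_vectors)

print(embeddings.shape)  # (3, 2): three documents, two learned dimensions
print(embeddings[0])     # just a handful of numbers, not a copy of the text
```

The output for each document is a short list of numbers describing where it sits in the learned space, not a stored copy of the original text, which is exactly the distinction the segment glossed over.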
This is really no different from how humans create art: they observe lots of styles, learn from them, and try to create new things based on the many dimensions of knowledge built up through observation and experience. Why is it wrong for a computer to learn from art posted online, but no issue for a human to learn from art posted online? Do humans have to cite every single painting they have ever seen when creating something new? Honestly, this seems like a double standard.
Also, creating AI models is itself an expression of the artistic process. These systems are created by humans, not machines; they are an extension of human mathematical and scientific creativity. Fire was made by hand for thousands of years; is a lighter, which lets you make a flame at any time, not an extension of human creativity? Likewise, building AI systems to create art is an extension of human creativity and ingenuity in the same way that inventing the lighter to make fire-starting easier was.
I liked Sarah Silverman in the rest of her segments, but here she really showed her ignorance and lack of technical understanding of the science and mathematics behind the development of AI.
True.
My company, which is in the AI space, has been working on a chatbot AI for 10 years and still has years to go before it is fully automated. It works fine, but it struggles with complex issues.
If you ask ChatGPT to do complex coding, it struggles mightily.
People under 25 are the ones who will have to deal with the AI mess. We are still 10 to 15 years away from it starting to kill jobs at major companies.
Yes, and in that same time frame AI and microrobots will kill off skilled labor too, from electricians and plumbers to cooks and construction workers.