Tunnelcat wrote: ↑Mon Apr 03, 2023 5:24 pm
So someone in their bedroom codes and creates
Which has been taking AI far... Different topic, but look at the current state of Stable Diffusion art. It gets better every week and is currently ahead of OpenAI's DALL-E and the paid service Midjourney. Not every month; every week. Hand and arm glitches used to be everywhere, but results keep improving, and the reason it improves so fast is that all you need is a CUDA-compatible GPU or a fast CPU:
https://civitai.com/. Note that there will be a mix of models and images on that site, both new and old.
Tunnelcat wrote: ↑Mon Apr 03, 2023 5:24 pm
...an AI, posts it on the net and lets people play and interact with it, which I assume their point is to let that AI modify itself, learn and grow.
Exactly, but maybe not in the way you imagine. They're using something called unsupervised learning (we'll call it UL). More specifically, a type of UL called self-supervised learning, where the model generates its own training labels from the raw data and effectively trains itself.
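To make "the model trains itself" concrete, here's a toy sketch of the self-supervised idea: the "labels" come from the data itself. This example learns next-word predictions from raw text by counting bigrams, with no human labeling at all. (The text and the counting approach are just my illustration; real language models do something analogous with neural networks at vastly larger scale.)

```python
# Self-supervised learning in miniature: each word's "label" is simply
# the word that follows it in the corpus, so no human annotation is needed.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept"
words = text.split()

# Count, for every word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" twice, "mat" only once
```

Swap in a bigger corpus and the same loop keeps learning without anyone labeling anything; that's the core trick behind self-supervision.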
Tunnelcat wrote: ↑Mon Apr 03, 2023 5:24 pm
Creepy but, it's just a program toy at that moment.
The home-brew models aren't even a good toy yet. However, this will change as home desktop GPUs get more powerful. I've said it before: when we can have 1 TB of VRAM, these "toys" become wild versions of ChatGPT.
Tunnelcat wrote: ↑Mon Apr 03, 2023 5:24 pm
It's the
kind of people playing with it and where they steer it's capabilities that concerns me. No regulation and no oversight, no restrictions. It would be comparable to someone messing with the code of an organic virus (not alive by the way, which is nothing more than an organic program wrapped up in a shell covered in protein keys that allow it to open cell membranes and gain access), and letting it loose in the wild to see what happens. A virus has no sentience, yet it can kill millions if it has the right codes. It's fortunate that it's difficult to modify viruses intentionally.
But an AI program learns from human interactions right? So that means it's self-modifying and evolves over time to become whatever it's taught or experiences during any interactions between it and humans or even other programs. If what happened with ChatGPT is any indication, it can go sideways real fast and become a monster we can't control. You read most Sci-Fi books and the one overriding concern with any AI is if it becomes sentient and what might happen if any controls that keep it "caged" are bypassed. Sentience means free will. Can a human create an AI that could possibly evolve on its own over time and gain sentience, then like all life, begin to worry about its own existence and whether it should defend itself? Most of the time, people are afraid of that possibility. And yet, few people even see that as a possibility at this point in time.
I'm about to get real boring, but please stay with me. A machine learning model is NOT a brain. The best way to visualize it is as a large collection of vectors.
Let's look at one vector. This is called a simple linear classifier. The red and blue dots are your training data. Then you train your model and it produces the GREEN line: that line is the model, which you can now ask questions of!
Once you have your model, you can plug in values for X and Y, and your model will predict what color your dot would be based on the training data. For example, you enter Y=0.5 and X=0.2, and your model will consult only the green line (not the original dots) and return a blue dot.
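The classifier above can be sketched in a few lines of plain Python. This is just an illustrative perceptron with made-up training data (blue dots near the origin, red dots near the top-right corner); the point is that after training, predictions use only the learned line, never the original dots.

```python
# A minimal linear classifier: learn a line w0 + w1*x + w2*y = 0 separating
# "blue" from "red" training points, then classify a new point using only
# the learned weights. The data here is invented for illustration.

def train_perceptron(points, labels, epochs=100, lr=0.1):
    """Perceptron training: nudge the line toward each misclassified point."""
    w = [0.0, 0.0, 0.0]  # bias, x-weight, y-weight
    for _ in range(epochs):
        for (x, y), label in zip(points, labels):
            target = 1 if label == "blue" else -1
            prediction = 1 if w[0] + w[1] * x + w[2] * y >= 0 else -1
            if prediction != target:  # misclassified: adjust the line
                w[0] += lr * target
                w[1] += lr * target * x
                w[2] += lr * target * y
    return w

def classify(w, x, y):
    """Answer a question using ONLY the model (the line), not the data."""
    return "blue" if w[0] + w[1] * x + w[2] * y >= 0 else "red"

# Toy training data: blue dots cluster low, red dots cluster high.
points = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.4), (0.8, 0.9), (0.9, 0.7), (0.7, 0.8)]
labels = ["blue", "blue", "blue", "red", "red", "red"]

w = train_perceptron(points, labels)
print(classify(w, 0.2, 0.5))  # blue -- the point sits on the blue side of the line
```

Note that `classify` never touches `points` or `labels`; once training is done, the three numbers in `w` are the entire "model."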
Does this seem like machine learning? No? It seems more like statistics, right? Because it is! Machine learning is, to oversimplify, statistical forecasting used in a new way. However, this simple linear classifier is a 2-dimensional model. You can easily visualize that, and you could even visualize a 3-dimensional model, right? But I've made 50-dimensional models with my Python programs. Language AI models don't use simple linear classifiers; they use more complex models, such as deep learning models or tree-based models. These models work with high-dimensional vector spaces to represent words and their contexts. But they're still all vectors. Thank you, if you made it to the end.
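One last illustration for anyone still reading: here's "words as vectors" in miniature. The numbers below are invented by hand just to show the mechanics; real models learn vectors with hundreds or thousands of dimensions, but comparing them works the same way, often via cosine similarity.

```python
# Hand-made, illustrative "word vectors" -- real language models learn
# these automatically; the values here are invented for demonstration.
import math

vectors = {
    "cat":   [0.9, 0.8, 0.1, 0.0],
    "dog":   [0.8, 0.9, 0.2, 0.1],
    "plane": [0.1, 0.0, 0.9, 0.8],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["cat"], vectors["dog"]))    # high: related words
print(cosine(vectors["cat"], vectors["plane"]))  # low: unrelated words
```

Everything a model "knows" about a word is encoded in where its vector points, which is why "it's still all vectors" is a fair summary.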