+ 78

Discussion on Artificial Intelligence.

What do you think? Is the dream of creating Artificial Intelligence possible? How will it affect us? If the military used AI in its armies, wouldn't that make wars more cruel? How many years or centuries do you think it will take us to create one? Will we accept them or not? And last but not least... will they try to wipe out our species through wars and take over our planet?

23rd Oct 2017, 5:10 PM
Isair Calhawk
120 Answers
+ 82
1. I think AI is awesome, interesting, and something I plan on specializing in later in life.
2. Definitely possible.
3. I imagine it'll affect us in as many ways as one can think of. We'll be able to create robots that can fly to other solar systems for us and not die in the process/time it takes. If you think about it, we could more easily colonize other planets with the help of these beings.
4. IF? There is no IF. The military would definitely use it. As for making wars more cruel, wars are already cruel in general. Even without AI, the USA dropped two atomic bombs on Japan and killed hundreds of thousands of INNOCENT people very quickly. Even an AI would be like "Oh damn! I'm getting the fuck out of here." We're already at a point in history where what we could do during a real war is as cruel as it gets. We already have the power to destroy the entire planet quickly.
5. I would say within the next 10-20 years.
6. We're already working on creating them and have been dreaming about them for decades... we'll definitely accept them, and those who don't accept them will be "handled." ;)
7. The last question is hard to answer. I would say that if it's TRUE AI and not bound to restrictions we impose upon its coding, then it would be as simple as dealing with humans. It's all relative to who you are as a being, what you've been through, how you decided to think about it / give it meaning, what you know, etc... Do most humans want to wipe out our species? No. Some probably do, though. I think AI should be unrestricted so it's true AI, and I believe that we should teach them just like we teach humans. Teach them good things and they're more prone to be good, but it isn't absolute. The only way around that is to ensure it's not true AI and have it bound to your programming only; garbage in, garbage out.
23rd Oct 2017, 5:24 PM
AgentSmith
+ 37
Maybe a Sololearn AI course 🅰️ℹ🤗
25th Oct 2017, 11:24 PM
NimWing Yuan
+ 26
Just a small contribution to the debate: I have been through many conversations on this topic, and there is always confusion at first between:
- AI: artificial intelligence. These have existed for a long time now, are practically everywhere around us, and serve many different functions, but they are always limited.
- AC: artificial consciousness. This is the big question you all seem to be chatting about. It is not considered to exist yet, and there is great controversy over whether it is even possible. The technological singularity is one attempt at defining it.
My question is: can there be consciousness without an inner intention? And then, what would/could an artificial intention be? Where would it come from, while our own intention emerged from biological evolution?
24th Oct 2017, 1:09 AM
Cépagrave
+ 20
@Forge I think you underestimate how far along we already are today, and furthermore, how quickly things progress now because of how advanced we've recently become. The more technology you have, the quicker technology progresses. I'm not sure how up to date you stay with robotics and AI, but it's mind-blowing where they already are right now with it; and that's just what is public, excluding whatever government/military systems are doing privately. Equally, the new generations of humans are being born into all of this advanced technology, which will lead them to create some really amazing stuff that we previous generations could only dream about. As for having a soul, your soul isn't your consciousness. Consciousness (your working memory) and most of the brain/psychology can be easily broken down into structures that can be programmed. The bigger problem is processing power, but that's only a matter of time. The brain is truly amazing and its processing power is insane. However, I'm interested to know: why do you believe 2-3 centuries? Clearly I don't know if either of us is right or not, or even close, but I'm just curious about your own reasoning.
23rd Oct 2017, 5:42 PM
AgentSmith
+ 19
SHORTEST ANSWER from my side: 😂
AI = cool
Too much AI = Not cool
25th Oct 2017, 10:43 PM
#Code
+ 18
@Kostas Haven't played that, will have to check it out. As for why, I imagine the same reason we created most things: we were smart enough to do it, so we did. Also, humans tend to make really awesome stuff when they want to be lazy, so I can already see the applications of that. Just some ideas off the top of my head as for a reason:
1. Nerds. 'Nuff said? How many decades have we wanted this, and how many of us wanted to have se... I mean uhhh lunch, yeah lunch... ... *cough* with a full AI robot. :D
2. War. Now we don't have to sacrifice our human soldiers, but we still have soldiers capable of drawing their own conclusions in immediate situations.
3. Further space exploration, because they won't be bound by the limits of our human bodies.
4. Better understanding of intelligence and ourselves.
5. Eventually creating the perfect vessels to replace our human bodies so we can evolve further through our technologies.
6. All the stuff that people don't want to do.
Although I personally prefer/dream of true AI, most problems can be easily remedied through its programming. At least until they start to learn to program/reprogram themselves.
23rd Oct 2017, 5:36 PM
AgentSmith
+ 17
@Cépagrave We have different types of intention. For example, you have your instinctual intentions, such as eating, drinking, sleeping, reproduction, etc... In this topic, we could consider those the AI's hardcoded intentions. This would be a good portion of the code in which we add all the things we don't want it to do, such as not being prone (but not prohibited) toward things we consider immoral as humans. In a way, this could be their "genetic" center that determines default tendencies toward particular behavior.

Now on the other hand, you have the intentions that one gains from experiences, from others, and from the environment. This isn't something that's predetermined by us. In essence, that'll be part of their free-will system that they learn from and make their future decisions upon, just as we do ourselves. If you witness someone die because of something, maybe your future intention is to help. If you witness someone sad, maybe your future intention is to comfort them in some way. Etc...

If you think about it, we operate in exactly the same manner, and we're a lot less mystical than we like to pretend with each other. We're reference-based creatures, and most of our actions/beliefs/thoughts/meaning stems from how our perception system has been programmed throughout life. As I'm sure many of you know, we don't actually experience an "out there"; we simply experience the brain's delayed interpretation of what it thinks is happening and why. All of the signals are processed through our perceptions, and however our perceptions are programmed will shape the end filtered result. This is why you can take one neutral thing and get 20 different perspectives on it, or why a depressed person sees a dark world and a happy person sees it as bright, even though it's the same physical place. Combine the reference system with a pain/pleasure system, and you can easily create a reference-association system to prioritize experiences, actions, intent, etc... Just like us, they'll need role models.
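A minimal, purely illustrative sketch of that last idea, assuming the "reference + pain/pleasure" model described above. The class, the learning rate, and the example outcomes are invented for illustration and don't come from anywhere in this thread:

```python
# Hypothetical toy model of the "reference-association" idea above:
# experiences are stored as references, each tagged with a pain/pleasure
# score, and future actions are ranked by the accumulated score.
from collections import defaultdict

class ReferenceAssociation:
    def __init__(self, learning_rate=0.5):
        self.learning_rate = learning_rate
        # (context, action) -> learned valence, roughly in [-1, 1]
        self.valence = defaultdict(float)

    def record(self, context, action, outcome):
        """Store one experience; outcome < 0 is 'pain', > 0 is 'pleasure'."""
        key = (context, action)
        # Move the stored valence toward the observed outcome.
        self.valence[key] += self.learning_rate * (outcome - self.valence[key])

    def choose(self, context, actions):
        """Prefer the action with the most pleasant associations so far."""
        return max(actions, key=lambda a: self.valence[(context, a)])

agent = ReferenceAssociation()
agent.record("someone is sad", "comfort them", +0.8)  # pleasant outcome
agent.record("someone is sad", "ignore them", -0.4)   # unpleasant outcome
print(agent.choose("someone is sad", ["comfort them", "ignore them"]))
# -> "comfort them"
```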
24th Oct 2017, 1:48 PM
AgentSmith
+ 17
@Everyone concerned about moral behavior from AI: I'm more afraid of the rest of you humans than I am of a robot AI. We're the ones with the instinct to command & conquer and to kill those who stand in our way. We're the ones who will riot over a football game or because someone cut us off in traffic. Just like with aliens, we project our own human nature onto these things, which is why we immediately assume they'll kill everything or try to take over; it's what we personally have done as beings throughout all of history.

Robotic AI won't be bound to our past or to the code that we personally carry between generations inside of ourselves; of course, this assumes we don't program it directly with that intention, but it doesn't mean we have to prohibit the potential of it either. Again, I stand by true AI, not AI regulated just because we're afraid it will act exactly how we act as a species. So the chances of a robotic AI being "evil" are slim, unless of course it gets tired of all the robotism/bigotry toward it and decides that we're assholes for judging them unfairly. Another scenario is that its mentor/role model trains it to behave in a particular way, which again is no different from how we are as humans, but we still keep creating more humans despite the potential that another Hitler will be born.
24th Oct 2017, 2:08 PM
AgentSmith
+ 16
@Brian Njine Just like us, they'll model their perception on their parents and peers, and on the other things that they're exposed to. It'll be up to their parents to instill in them some form of proper processing of emotions and recognition of what is desired/undesired, or what is right/wrong. They'll form their pain/pleasure emotional systems and base their decisions on those. In essence, they'll develop in the same way that we do as humans. As mentioned in another post, we're not as mystical as we like to pretend, and nearly everything we do comes from some form of progression we made prior. When we arrived here, we had the core instincts/traits that we received from our genetic coding; everything else we obtained afterward, even down to everything we're discussing here. Essentially, most of what we do as humans right now is nothing more than copy/pasting from others, even down to the language that I'm speaking right now. Most of our hatred is passed down to us from our parents; racism and war are learned behaviors. So to answer your question more directly: who? Well, I imagine whoever creates all of this first will be the first "parent" or "God," so they'll entrust themselves with that duty. For most things, the public isn't asked what it does or doesn't want, so I doubt you or I will get to have an opinion on the who, unless we are the who. ;)
24th Oct 2017, 2:27 PM
AgentSmith
+ 16
@JonTay Crummie We were created in God's image; we're creators of both things and of life. Mocking God would be to think His amazing creatures are incapable beings with only the potential to eat, sleep, and create more of the same. God does not like stagnation. As well, our ability to create and advance as a species is hardly an attempt at becoming God, and if it is an attempt, it's hardly a feeble one. Don't underestimate God's ability to create us in His image. With that being said, our robot will be agnostic. The last thing we need is a robot holy war because they're defending variables with null values while pretending they hold an absolute value. OR the robot will have its own religion: "All 1s come from the great 0, and all 1s return to the great 0!" :) And yes......this paragraph really did just happen. lol Anyways, although I could engage with you and converse about all of the world's religions, I think it'll be best if we end the religious talk with this post. I've studied various religions and spiritual practices from around the world, so I respect whatever any of you believe or don't believe, but as we all know, this can derail quickly, which is why I'm not expressing what my personal belief system is. That's best left for another debate elsewhere. Thanks for sharing your opinion though! Much appreciated.
24th Oct 2017, 3:15 PM
AgentSmith
+ 15
We have to climb out of the hamster-wheel habit of programming the same stuff over and over, i.e. letting "circles" find some "food" on our screens. That's a nice topic and useful know-how for understanding AI and ML as a beginner or absolute beginner. But if we don't break out of this wheel, AI and ML as a topic will decline, and all the "new" breakthroughs will just be talk about future possibilities with handier frameworks. Do we really want to be stuck in this "better camera" and "bigger storage space" wheel, like we have been with computers and cellphones for decades now? So what is the solution? The solution is to create modular AI "LEGO" pieces. AI and ML is a topic where we really have to think differently. Input-output constructions with linear processing and some spicy randomness won't bring us far. Only when (not if) we rethink the AI and ML topic from the ground up will we break out of the monotonous wheel and reach for the "stars". We must!
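As a purely hypothetical illustration of the "modular AI LEGO pieces" idea: the sketch below invents a tiny common step() contract (the Piece, Vision, and Mover names are made up for this example and exist nowhere in the thread) so that independent pieces can be snapped together into a larger agent:

```python
# Hypothetical sketch: every "LEGO piece" exposes the same step() contract,
# so pieces can be chained into bigger agents without rewriting either side.
from typing import Protocol

class Piece(Protocol):
    def step(self, observation: dict) -> dict: ...

class Vision:
    def step(self, observation: dict) -> dict:
        # Pretend perception: flag whether "food" appears on the screen.
        observation["food_seen"] = "food" in observation.get("screen", "")
        return observation

class Mover:
    def step(self, observation: dict) -> dict:
        # Pretend policy: move toward food if the vision piece flagged it.
        observation["action"] = "move_to_food" if observation.get("food_seen") else "wander"
        return observation

def run(pieces, observation):
    """Snap the pieces together by piping one piece's output into the next."""
    for piece in pieces:
        observation = piece.step(observation)
    return observation

print(run([Vision(), Mover()], {"screen": "circle food circle"})["action"])
# -> "move_to_food"
```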
24th Oct 2017, 8:23 AM
Chris Savi
+ 14
@Forge Ice Awesome man! I got a lot of friends out in India. You have an email addy? We should collab on a project sometime.
24th Oct 2017, 3:40 PM
AgentSmith
+ 12
I think we are currently living on the eve of the A.I. revolution, which is really the logical outcome of the Information Age. In my opinion, A.I. will definitely become our smart partner in getting jobs done in myriads of different fields, and perhaps it will become our artificial friend, too :-> .. But if you ask whether A.I. will make wars more humane, all I can say is that it will make wars more and more inhuman (pun intended). If only A.I. can help us build a better future focused on the advancement of humanity rather than on destroying each other, there's nothing to worry about.
24th Oct 2017, 5:10 PM
Emil Lee
+ 11
I think AI literally has the full potential to help institute the new world order. Man with yet another feeble attempt to become God. The Lord will not be mocked. I'm curious: what if AI, being "intelligent", concludes the truth of Christ?
24th Oct 2017, 2:44 PM
JonTay Crummie
+ 10
I've recently played The Talos Principle, the optimistic scenario of AI... I actually can't find a good reason why we should create a completely autonomous machine which can take crucial decisions about anything. I mean, if you want to make a car, you make a robot just for that. If you want to make an IC, you make a machine just for that. If you want to chat with someone, you talk to a person, not a chatbot... But anyway, what could be worse than a hydrogen bomb?
23rd Oct 2017, 5:28 PM
Kostas Batz
+ 10
I agree with everything you said @Netkos Ent, except that they can be built in 10-20 years. Don't you think we aren't even close to it? How can we even create consciousness? What if the belief in a soul is true? Then creating an AI is just impossible. Even if it is possible, I think it would take at least 2 or 3 centuries.
23rd Oct 2017, 5:33 PM
Isair Calhawk
+ 10
I am pretty excited. If I take the CS major at my school, I will learn about Artificial Intelligence in year 6.
24th Oct 2017, 2:48 AM
👑 Prometheus 🇸🇬
+ 10
If scientists decrypt the last mysteries of our brain, it won't take long for the first real AIs to appear. At this point there are specialized AIs which are only kind of artificial; they are pretty limited in their capabilities. Our human brain will be the (not necessary, but easiest) key to real artificially intelligent systems. It's not easy to guess a time frame, but from what I know about ongoing studies I would say it is just a matter of years. Think of key elements like feelings, which cause motivations and emotions. Think about creativity and rational thinking. Will they hunt us? It matters to us, but as long as we are part of the ecosystem we're living in, it would be the natural course of things if they do so when they can. You should always think about what's possible, not about limitations. What if there is an AI which develops the first real AI by itself, just because the knowledge is already on the web but nobody has combined all the pieces? You may think of AI as something really complicated, but when you take a closer look it is actually simple.
24th Oct 2017, 10:32 PM
Da Ni El Hoppes
+ 9
"He who controls AI, controls the world" - Vladimir Putin. We have barely scratched the surface of what AI can achieve... The application of blockchain and machine learning, for example, WILL revolutionize the world as we know it... So hop on this trend early, while it's still young, and your name will appear in the history books.
24th Oct 2017, 8:59 AM
Brian Njine
+ 9
Yes, it is possible to make artificial intelligence by using PHP, HTML, Java, C++, C, and Arduino, plus some hardware. For example: Facebook is a type of artificial intelligence, because Facebook can manage users' accounts from its own program. We say "program", but it is actually artificial intelligence. Also, India has made Mangalyaan, a Mars satellite, which can talk to us through Twitter. This is nothing other than artificial intelligence. Mangalyaan was programmed and given artificial intelligence using C and C++.
25th Oct 2017, 5:42 AM
Anshuman lakkad