|
Post by surnshurn on Aug 18, 2017 13:32:33 GMT -5
and it's as scary as we all imagined it would be. here is professional dota2 player "dendi" playing a 1v1 exhibition match versus a self-taught bot
|
|
|
Post by surnshurn on Aug 21, 2017 10:18:47 GMT -5
after doing a bit of follow-up, it turns out the bot lost a few of its 1,000 games here in seattle. its losses were attributed to tactics like buying boots and wind lace (both increase movement speed; this probably wouldn't have worked given another day or two of learning) and avoiding the center of the lane to kite enemy creeps until the bot lost its tower to attrition (a 'foresight' tactic, which exposes an obvious weakness of the learning algorithm: it favors tactics that achieve incentives as quickly as possible)
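the 'achieve incentives as quickly as possible' bit falls out of how these learners discount delayed rewards. here's a toy sketch in python (made-up numbers, nothing from the actual bot):

```python
# with discounting, each step of delay multiplies a reward by gamma < 1,
# so a small payoff now can outscore a bigger payoff later
def discounted_return(rewards, gamma=0.8):
    """sum of rewards, each discounted by gamma per step of delay"""
    return sum(r * gamma**t for t, r in enumerate(rewards))

quick = discounted_return([10, 0, 0, 0, 0])    # small incentive right away
patient = discounted_return([0, 0, 0, 0, 20])  # bigger incentive, 4 steps later

print(quick, patient)  # 10.0 vs roughly 8.19 -- the quick tactic wins
```

which is roughly why a patient, long-game tactic like the creep-kiting exploit can catch the bot off guard.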
unfortunately (imo) the creators have decided to muck around with the code to correct these play deficiencies, meaning that this bot build is no longer purely self-taught. still interesting stuff, and i'm looking forward to the 5v5 exhibition promised for next year. it's a far cry from the TAS runs that have shown up in the past decade or so.
nobody else on hg101 finds this stuff really interesting?
|
|
|
Post by Owlman on Aug 21, 2017 10:39:14 GMT -5
So this is a true, learning AI? Not just a really fancy script?
|
|
|
Post by surnshurn on Aug 21, 2017 10:56:10 GMT -5
well, it's self-taught in the sense that it's given objectives in the form of incentives (earning an incentive in a given situation is better than not earning one), then it plays countless games until it starts gaining incentives, and it builds from there. a simple demonstration of this process in action:
this is opposed to being given a set of instructions by a programmer (do this in this case, do this other thing in this other case)
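to make that concrete, here's about the smallest possible version of learning-from-incentives: a two-armed bandit in python that i made up for illustration (the real bot is vastly more sophisticated). the agent is never told which action is good; it tries things, keeps a running score per action, and leans toward whatever has paid off so far.

```python
import random

random.seed(0)  # deterministic for the example

def reward(action):
    # hidden from the agent: action 1 pays off far more often than action 0
    return 1 if random.random() < (0.8 if action == 1 else 0.2) else 0

values = [0.0, 0.0]  # the agent's running estimate of each action's payoff
counts = [0, 0]

for _ in range(1000):
    # mostly exploit the best-looking action, sometimes explore at random
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = values.index(max(values))
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # incremental mean

# by now values[1] should sit near 0.8 and values[0] near 0.2,
# i.e. the agent has 'taught itself' to prefer action 1
```

nobody wrote a rule saying "pick action 1" -- the preference emerges purely from which attempts earned the incentive.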
|
|
|
Post by Owlman on Aug 22, 2017 7:40:55 GMT -5
Ah, that's pretty sweet. And a little scary.
|
|
|
Post by llj on Aug 22, 2017 11:29:16 GMT -5
Elon Musk has been railing against the dangers of AI for a while now.
I still think a truly self-thinking AI is a ways off, though. Right now you still largely have to set specific goals for it to achieve to guide its thinking process.
|
|
|
Post by surnshurn on Aug 24, 2017 13:46:27 GMT -5
That's pretty ironic given that Musk's company, Tesla, tests and ships automated control systems in its cars. i personally don't see AI becoming as dangerous as people fear it will. besides, it seems like every week brings something new to be afraid of anyway, but that's a different topic.
I find it personally interesting to watch the paths development takes, and where, what, and how safety measures come into play (this is a discussion that's been going on since before most people alive today were born, after all).
quick edit: the company i work for recently released an automated service that does what i do, faster and cheaper. of course this means i'll take just that little bit more of interest in this science and the direction it's going.
|
|
|
Post by GamerL on Aug 24, 2017 17:26:32 GMT -5
I don't really see why AI is something to be afraid of. Is a self-aware human evil just because? No, so why should a self-aware artificial intelligence be something inherently bad or evil?
|
|
|
Post by backgroundnoise on Aug 24, 2017 17:34:37 GMT -5
GamerL said: "I don't really see why AI is something to be afraid of, is a self conscious human evil just because? No, so why should a self conscious artificial intelligence be something inherently bad or evil?"
Because humanity does not like fast and efficient machines whose actions cannot be predicted (if various science fiction stories are to be believed).
|
|
|
Post by surnshurn on Aug 24, 2017 21:34:18 GMT -5
the danger is that a truly sentient ai could become so intelligent that it's literally smarter than the entirety of the human race, and it could do so without our knowledge, on a growth curve that's practically unimaginable. there's also the danger in the way a machine thinks: single-minded, arranging its priorities in sometimes unpredictable ways that no (sane) human ever would. one of the more absurd examples is a robot designed to collect stamps, whose overriding priority to collect more stamps leads it to break down all organic matter in the known universe to create more and more stamps. that's a very extreme argument of course, but one that has been used in serious discussion.
|
|
|
Post by llj on Aug 24, 2017 21:40:10 GMT -5
We're afraid that robots will eventually figure out that humanity indeed does suck and is bad for the planet overall, and will then take steps to remedy that problem.
There's also robot rights. Forget equal rights, what if they decide they are in fact superior and revolt against the idea of being tools for us?
Anyway, these questions are so fascinating to ponder that I kind of want to be around if/when it happens.
|
|
|
Post by surnshurn on Aug 25, 2017 4:55:49 GMT -5
machines don't think that way though. they're not selfish or petty in the least (unless programmed to be). to a machine, a human is no more and no less than a machine; both are simply values in an equation. this is like saying 'velocity sucks, mass is better' when calculating momentum. unless it is. then you can get an absurd and potentially hilarious high-momentum application like in this illustrated case: imgur.com/a/AOh6y
edit: got the equation wrong in my analogy. it's obviously momentum, p = m * v , as opposed to f = v * m .
|
|
|
Post by surnshurn on Aug 25, 2017 9:24:18 GMT -5
after doing some casual reading:
the dota 2 bot in the video from the first post was actually not an artificial intelligence (a term most people use to mean 'AGI', or artificial general intelligence) but a kind of advanced script, similar to the Lua scripts TAS-ers use to find the best possible routes through specific sections of a game. i believe something like this was used in the AGDQ Mega Man 2 TASbot segment (viewable on youtube) to find a glitch that allowed erroneous completion of a stage. what it did was let the runners make something like 200 million separate attempts in a 2-week time frame to find the exact set of inputs, or something similar. so it's not new, but having a bot react to a human player in virtual space is pretty new. there's also an interesting video of a proof-of-concept SSBM bot running in the Dolphin emulator that is basically unbeatable. it doesn't look or act like a human player in any way, though; its movements are absolutely absurd.
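for anyone curious what that kind of brute-force input search looks like in principle, here's a toy python version (a made-up mini 'game', nothing to do with the actual TASbot tooling): enumerate every short button sequence until one completes the stage.

```python
from itertools import product

BUTTONS = ["left", "right", "jump"]

def simulate(inputs):
    """stand-in game model: win by ending at position 3 off a jump"""
    pos = 0
    for b in inputs:
        if b == "left":
            pos -= 1
        elif b == "right":
            pos += 1
    return pos == 3 and inputs[-1] == "jump"

def search(max_len=4):
    # enumerate sequences shortest-first, like replaying the same
    # section of the game with every possible input string
    for n in range(1, max_len + 1):
        for seq in product(BUTTONS, repeat=n):
            if simulate(seq):
                return seq
    return None

print(search())  # ('right', 'right', 'right', 'jump')
```

the real thing searches an astronomically bigger input space against an actual emulator, but the shape of it is the same: a dumb-but-tireless loop, not anything that 'understands' the game.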
|
|
|
Post by Purple Moss on Aug 25, 2017 20:16:17 GMT -5
AI is very interesting alright. And exciting. And possibly dangerous. This video gives a simple overview of AI today. I bet we're going to see this a lot more often in e-sports -- human teams playing exhibition matches against smart AI opponents. A single AI bot has weaknesses and exploits, but a group of 5 might be nigh unstoppable. Coordination and playing in mixed human/AI teams would also be an interesting challenge.
|
|
|
Post by llj on Aug 25, 2017 21:38:29 GMT -5
surnshurn said: "machines don't think that way though. they're not selfish or petty in the least (unless programmed to be). to a machine, a human is no more and no less than a machine, they are both simply values in an equation. [...]"
Well, I was more getting into science fiction rather than the current logic for AI, haha.
|
|