Accelerating ML Research: Meet us at NeurIPS 2019

After spending several decades on the margins of AI, reinforcement learning has recently emerged as a powerful framework for developing intelligent systems that can solve complex tasks in real-world environments, from playing games such as Dota and StarCraft to teaching a robot hand to manipulate a Rubik's Cube. However, one attribute of intelligence that still eludes modern learning systems is generalizability. Until very recently, the majority of reinforcement learning research has involved training and testing algorithms in the same, often deterministic, environment. As a result, the learned policies typically perform poorly when deployed in environments that differ even slightly from those in which they were trained. Even more importantly, the paradigm of task-specific training yields learning systems that scale poorly to a large number of tasks, even when the tasks are interrelated.

For instance, consider our work on learning to play Snoopy Pop from visual inputs using the Unity ML-Agents Toolkit. A game-playing agent trained on a fixed set of levels may not perform well on a new, previously unseen level, and its performance may also degrade if game mechanics are modified. This is problematic since games have become live services with ever-evolving content (e.g., new or changing levels, challenges, and missions). Such an agent would need to be continuously retrained, which can be time-consuming or prohibitively expensive. To overcome this constraint, we are committed to developing learning systems that generalize and adapt easily to new tasks or changing game mechanics. With the Unity ML-Agents Toolkit, we took a first step toward addressing this challenge by providing the capability to train agents on distributions of tasks.
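To make the idea of training on a distribution of tasks concrete, here is a minimal sketch in plain Python. It is an illustration of the general technique, not the ML-Agents API: the parameter names, `ToyEnv`, and `sample_task_params` are hypothetical and invented for this example. The key point is that a fresh task is drawn at every episode reset, so a single policy is trained across the whole range of environment variations rather than one fixed environment.

```python
import random

# Hypothetical task-parameter ranges; in practice these would come from a
# sampler or curriculum configuration, not hard-coded values like these.
PARAM_RANGES = {
    "gravity": (7.0, 12.0),
    "ball_scale": (0.5, 1.5),
}

def sample_task_params():
    """Draw one task from the distribution (uniform per parameter)."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

class ToyEnv:
    """Stand-in environment whose dynamics depend on the sampled parameters."""
    def __init__(self, params):
        self.params = params
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation (placeholder)

    def step(self, action):
        self.t += 1
        # Reward depends on the task parameters, so a policy that only
        # memorizes one parameter setting will not transfer to others.
        reward = action * self.params["ball_scale"] - 0.01 * self.params["gravity"]
        done = self.t >= 100
        return 0.0, reward, done

def train(num_episodes=1000):
    for _ in range(num_episodes):
        env = ToyEnv(sample_task_params())  # new task every episode
        obs, done = env.reset(), False
        while not done:
            action = random.choice([0, 1])  # placeholder for the learned policy
            obs, reward, done = env.step(action)

if __name__ == "__main__":
    train()
```

In a real training setup, the placeholder policy above would be a learned one, and the sampled parameters would drive the actual game environment; the structure of the loop, though, is the same: resample the task, run an episode, update the policy.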
