A List of Machine Learning Challenges in 2018

Competitions are a great way to excel in machine learning. They offer several advantages in addition to helping you gain knowledge and develop your skill set.

The problems and goals are very well defined. This saves you the hassle of coming up with a problem and rigorously defining goals that are both achievable and non-trivial. You are also provided with data, which in most cases is ready for use: someone has already done the painstaking work of collecting, preprocessing, and organizing it. If it’s a competition on supervised learning, you also get labels for the data.

If you’re a procrastinator, deadlines come to your rescue. They keep you focused and prevent you from going astray ;)

Competition leaderboards (if the competition has one) push you to do better. They keep things in perspective by giving continuous feedback on how you’re doing relative to others. You strive to find better solutions, try to surpass yourself, and in the process keep growing.

Finally, the rewards. They come in various forms. Monetary rewards are one. The satisfaction of solving a challenging problem and growing is another. But the main motivation for writing this post is the third kind: if you’re a top performer in a competition organized under a conference, you get a chance to publish your results.

I was looking for a curated list of such competitions but couldn’t find any, so I decided to make one. The table below summarizes all the competitions I could find, ordered by their deadlines. I plan to update the list regularly: as more conferences release information about their competitions, I’ll add them to the list.

If you know of any competition that is not on the list, please let me know in the comments or feel free to send a pull request.

| Name | Conference | Starts | Ends | Website | Sub-Challenges |
| --- | --- | --- | --- | --- | --- |
| Mobile Microrobotics Challenge | ICRA | 15th December, ‘17 | 16th February | Link | 03 |
| Disguised Faces Workshop Challenge | CVPR | 20th January | 20th February | Link | - |
| New Trends in Image Restoration and Enhancement (NTIRE) Challenge | CVPR | 10th January | 27th February | Link | 03 |
| Interspeech Computational Paralinguistics ChallengE (ComParE) | Interspeech | - | 16th March | Link | - |
| Nvidia AI City Challenge | CVPR | 10th December, ‘17 | 31st March | Link | - |
| UG2 Prize Challenge | CVPR | 15th January | 2nd April | Link | 02 |
| DJI RoboMaster AI Challenge | ICRA | 1st January | 10th April | Link | - |
| Challenge on Learned Image Compression (CLIC) | CVPR | 24th December, ‘17 | 22nd April | Link | - |
| Large-Scale Landmark Recognition | CVPR | 1st January | 1st May | Link | - |
| Robust Vision Challenge | CVPR | 1st February | 15th May | Link | 06 |
| ICMI 2018 EAT | ICMI | 4th April | 29th May | Link | 03 |
| KDD Cup | KDD | 15th March | 31st May | Link | - |
| The Look Into Person (LIP) Challenge | CVPR | - | 4th June | Link | 05 |
| ActivityNet Large-Scale Activity Recognition Challenge | CVPR | 7th December, ‘17 | 8th June | Link | 07 |
| Low-Power Image Recognition Challenge | CVPR | - | 18th June (onsite) | Link | - |
| DAVIS Challenge on Video Object Segmentation | CVPR | 1st April | 30th June | Link | - |
| Hearthstone AI Competition | CIG | - | 15th July | Link | 02 |
| AI for Prosthetics | NIPS | 1st June | 30th Sep | Link | - |
| ConvAI2 | NIPS | 21st Mar | 10th Oct | Link | - |
| Adversarial Vision Challenge | NIPS | 2nd July | 10th Oct | Link | 03 |
| TrackML: Particle Tracking Challenge | NIPS | 1st July | 20th Oct | Link | - |
| AutoML for Lifelong Machine Learning | NIPS | 23rd July | 6th Nov | Link | - |
| Pommerman | NIPS | 1st June | 26th Nov | Link | - |
| InclusiveImages | NIPS | 4th Sep | 7th Dec | Link | - |
| The AI Driving Olympics | NIPS | 1st Oct | 7th Dec | Link | - |
| EmoContext | ACL/NAACL | 21st Aug | 10th Jan, 2019 | Link | - |
| Microsoft AI Challenge India | - | 1st Oct | 10th Jan, 2019 | Link | - |

Disguised Faces Workshop Challenge

With recent advances in deep learning, the capabilities of automatic face recognition have increased significantly. However, face recognition in an unconstrained environment with non-cooperative users is still a research challenge, pertinent to users such as law enforcement agencies. While several covariates such as pose, expression, illumination, aging, and low resolution have received significant attention, “disguise” is still considered an arduous covariate of face recognition.

Challenge Website | Back

NTIRE 2018 Challenge on Image Super-Resolution

In order to gauge the current state of the art in (example-based) single-image super-resolution under realistic conditions, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference.

The challenge has 4 tracks:

  • Track 1: classic bicubic uses bicubic downscaling (Matlab imresize), the most common setting in the recent single-image super-resolution literature.
  • Track 2: realistic mild adverse conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space and for all images.
  • Track 3: realistic difficult adverse conditions makes the same assumptions as Track 2 (degradation operators estimable from training pairs, identical within an image space and across all images), but under more difficult adverse conditions.
  • Track 4: realistic wild conditions again assumes that the degradation operators can be estimated through training pairs of low- and high-resolution images, but they are the same within an image space and DIFFERENT from one image to another. This setting is the closest to real “wild” conditions.
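As a toy illustration of how Track 1 training pairs are produced, here is a minimal Python sketch that bicubically downscales a high-resolution image. It assumes Pillow is installed; Pillow’s bicubic filter only approximates the Matlab imresize the challenge uses, so treat it as a sketch, not the official degradation.

```python
# Toy sketch of Track 1 (classic bicubic) pair generation.
# Assumes Pillow; its bicubic filter approximates, but does not exactly
# match, Matlab's imresize as used by the challenge.
from PIL import Image

def make_lr_hr_pair(path, scale=4):
    hr = Image.open(path).convert("RGB")
    # Crop so both dimensions are divisible by the scale factor.
    w, h = (hr.width // scale) * scale, (hr.height // scale) * scale
    hr = hr.crop((0, 0, w, h))
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
    return lr, hr
```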

NTIRE 2018 Challenge on Image Dehazing

In order to gauge the current state of the art in image dehazing, for real as well as synthesized haze, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. A novel dataset of real and synthesized hazy images with ground truth will be introduced with the challenge. This is the first online image dehazing challenge.

The challenge has 3 tracks:

  • Track 1: realistic haze uses synthesized hazy images, a common setting in the recent image dehazing literature.
  • Track 2: real haze with ground truth.
  • Track 3: real haze with color reference.

NTIRE 2018 Challenge on Spectral Reconstruction from RGB Images

In order to gauge the current state of the art in spectral reconstruction from RGB images, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. The largest dataset to date will be introduced with the challenge. This is the first online challenge on spectral reconstruction from RGB images.

The challenge has 2 tracks:

  • Track 1 (“Clean”): recovering hyperspectral data from uncompressed 8-bit RGB images created by applying a known response function to ground-truth hyperspectral information.
  • Track 2 (“Real World”): recovering hyperspectral data from JPEG-compressed 8-bit RGB images created by applying an unknown response function to ground-truth hyperspectral information.
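The forward model behind both tracks is simple to state: each RGB pixel is a projection of the corresponding hyperspectral pixel through a camera response function, and the task is inverting that projection. Below is a minimal NumPy sketch of the Track 1 setting, with random toy data standing in for the real cube and response function:

```python
# Toy sketch of the Track 1 ("Clean") forward model: RGB is obtained by
# projecting each hyperspectral pixel through a known response function.
import numpy as np

def hyperspectral_to_rgb(cube, response):
    # cube: (H, W, B) hyperspectral image with B spectral bands
    # response: (B, 3) camera response function, one column per RGB channel
    rgb = np.tensordot(cube, response, axes=([2], [0]))    # (H, W, 3)
    rgb = rgb / rgb.max()                                  # normalize to [0, 1]
    return np.clip(rgb * 255.0, 0, 255).astype(np.uint8)  # 8-bit, as in Track 1

cube = np.random.rand(4, 4, 31)   # toy 31-band cube
resp = np.random.rand(31, 3)      # toy response function
print(hyperspectral_to_rgb(cube, resp).shape)  # (4, 4, 3)
```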

Challenge Website | Back

UG2 Prize Challenge

What is the current state-of-the art for image restoration and enhancement applied to images acquired under less than ideal circumstances?

Can the application of enhancement algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content?

The UG2 Challenge seeks to answer these important questions for general applications related to computational photography and scene understanding. As a well-defined case study, the challenge aims to advance the analysis of images collected by small UAVs by improving image restoration and enhancement algorithm performance using the UG2 Dataset.

Challenge Website | Back

Challenge on Learned Image Compression (CLIC)

Recent advances in machine learning have led to increased interest in applying neural networks to the problem of compression. We propose hosting an image-compression challenge that specifically targets methods which have traditionally been overlooked, with a focus on neural networks (but traditional approaches are also welcome). Such methods typically consist of an encoder subsystem that takes images and produces representations which are more easily compressed than the pixel representation (e.g., a stack of convolutions producing an integer feature map), followed by an arithmetic coder. The arithmetic coder uses a probabilistic model of the integer codes to generate a compressed bit stream, which makes up the file to be stored or transmitted. Decompressing this bit stream takes two additional steps: first, an arithmetic decoder, which shares a probability model with the encoder, losslessly reconstructs the integers produced by the encoder; a final decoder then produces a reconstruction of the original image.
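To make the pipeline concrete, here is a minimal sketch of the encoder/decoder split described above, written in PyTorch purely as an assumption of convenience. Real entries replace the bare rounding step with a learned entropy model and an actual arithmetic coder:

```python
# Minimal sketch of a learned-compression pipeline: a convolutional encoder
# produces an integer feature map (the codes an arithmetic coder would
# compress losslessly), and a decoder reconstructs the image from them.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # stack of strided convolutions
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 5, stride=2, padding=2),
        )

    def forward(self, x):
        # Rounding yields integer codes; training uses a differentiable
        # surrogate (e.g., additive uniform noise), omitted here.
        return torch.round(self.net(x))

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 64, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, codes):
        return self.net(codes)

x = torch.rand(1, 3, 32, 32)
codes = Encoder()(x)       # (1, 32, 8, 8) integer feature map
recon = Decoder()(codes)   # (1, 3, 32, 32) reconstruction
print(codes.shape, recon.shape)
```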

Challenge Website | Back

Large-Scale Landmark Recognition

This workshop is to foster research on image retrieval and landmark recognition by introducing a novel large-scale dataset, together with evaluation protocols. More details will be available soon.

Challenge Website | Back

Robust Vision Challenge

The increasing availability of large annotated datasets such as Middlebury, PASCAL VOC, ImageNet, MS COCO, KITTI and Cityscapes has led to tremendous progress in computer vision and machine learning over the last decade. Public leaderboards make it easy to track the state of the art in the field by comparing the results of dozens of methods side by side. While steady progress is made on each individual dataset, many of them are limited to specific domains. KITTI, for example, focuses on real-world urban driving scenarios, while Middlebury considers indoor scenes and VIPER provides synthetic imagery in various weather conditions. Consequently, methods that are state of the art on one dataset often perform worse on a different one or require substantial adaptation of their model parameters.

The goal of this workshop is to foster the development of vision systems that are robust and consequently perform well on a variety of datasets with different characteristics. Towards this goal, we propose the Robust Vision Challenge, where performance on several tasks (e.g., reconstruction, optical flow, semantic/instance segmentation, single-image depth prediction) is measured across a number of challenging benchmarks with different characteristics: indoors vs. outdoors, real vs. synthetic, sunny vs. bad weather, different sensors. We encourage submissions of novel algorithms, techniques currently under review, and methods that have already been published.

Challenge Website | Back

ActivityNet Large-Scale Activity Recognition Challenge

This challenge is the 3rd annual installment of the ActivityNet Large-Scale Activity Recognition Challenge, which was first hosted during CVPR 2016. It focuses on the recognition of daily-life, high-level, goal-oriented activities from user-generated videos, such as those found in internet video portals.

We are proud to announce that this year the challenge will host seven diverse tasks which aim to push the limits of semantic visual understanding of videos and to bridge visual content with human captions. Three of the seven tasks are based on the ActivityNet dataset, which was introduced in CVPR 2015 and is organized hierarchically in a semantic taxonomy. These tasks focus on tracing evidence of activities in time in the form of actionness/proposals, class labels, and captions.

Challenge Website | Back

KDD Cup

SIGKDD-2018 will take place in London, UK in August 2018. The KDD Cup competition is anticipated to last for 2-4 months, and the winners will be notified by mid-June. The winners will be honored at the KDD conference opening ceremony and will present their solutions at the KDD Cup workshop during the conference. The winners are expected to be monetarily rewarded, with the first prize being in the ballpark of ten thousand dollars.

Challenge Website | Back

DJI RoboMaster AI Challenge

DJI started RoboMaster in 2015 as an educational robotics competition for talented engineers and scientists. The annual RoboMaster competition requires teams to build robots that use shooting mechanisms to battle with other robots. The performance of the robots is monitored by a specially designed referee system, which converts projectile hits into health-point deductions on hit robots. To view past games and introductory videos, visit https://www.twitch.tv/robomaster. To see the RoboMaster 2018 promotional video, go to https://youtu.be/uI2uoV58pzQ.

Each team will build one or two autonomous AI robots. Robots will compete in a 5m x 8m arena filled with various obstacles. Participants will design robots that autonomously shoot plastic projectiles. The objective is to outcompete advanced official DJI robots in a battle of wits.

Challenge Website | Back

Mobile Microrobotics Challenge

The IEEE Robotics & Automation Society (RAS) Micro/Nano Robotics & Automation Technical Committee (MNRA) invites applicants to participate in the 2018 Mobile Microrobotics Challenge (MMC), in which microrobots on the order of the diameter of a human hair face off in tests of autonomy, accuracy, and assembly.

Teams can participate in up to three events:

  1. Autonomous Manipulation & Accuracy Challenge: Microrobots must autonomously manipulate micro-components around fixed obstacles to a desired position and orientation superimposed on the substrate. The objective is to manipulate the objects as precisely as possible to their goal locations and orientations in the shortest amount of time.
  2. Microassembly Challenge: Microrobots must assemble multiple microscale components inside a narrow channel in a fixed amount of time. This task simulates anticipated applications of microassembly, including manipulation within a human blood vessel and the assembly of components in nanomanufacturing.
  3. MMC Showcase & Poster Session: Each team has an opportunity to showcase and demonstrate any advanced capabilities and/or functionality of their microrobot system. Each participating team will get one vote to determine the Best in Show winner.

Challenge Website | Back

Interspeech Computational Paralinguistics ChallengE (ComParE)

The Interspeech Computational Paralinguistics ChallengE (ComParE) series is an open challenge in the field of computational paralinguistics, dealing with states and traits of speakers as manifested in the properties of their speech signal. The Challenge has taken place annually at INTERSPEECH since 2009. Every year, we introduce new tasks, as there still exists a multiplicity of highly relevant but not yet covered paralinguistic phenomena. The Challenge addresses the Audio, Speech, and Signal Processing, Natural Language Processing, Artificial Intelligence, Machine Learning, Affective & Behavioural Computing, Human-Computer/Robot Interaction, mHealth, Psychology, and Medicine communities, and any other interested participants.

Challenge Website | Back

Nvidia AI City Challenge

There will be 1 billion cameras by 2020. Transportation is one of the largest segments that can benefit from actionable insights derived from the data these cameras capture. Between traffic, signaling systems, transportation systems, infrastructure, and transit, the opportunity for insights from these cameras to make transportation systems safer and smarter is immense. Unfortunately, several obstacles have kept these potential benefits from materializing for this vertical: poor data quality, the lack of labels for the data, and the lack of high-quality models that can convert the data into actionable insights are some of the biggest impediments. There is also a need for platforms that allow for appropriate analysis from edge to cloud, which will accelerate the development and deployment of these models. The NVIDIA AI City Challenge Workshop at CVPR 2018 will specifically focus on intelligent transportation system (ITS) problems such as:

  • Estimating traffic flow and volume
  • Leveraging unsupervised approaches to detect anomalies such as lane violations, illegal U-turns, and wrong-direction driving. This is the only way to get humans in the loop to pay attention to meaningful visual information
  • Multi-camera tracking and object re-identification in urban environments.

Challenge Website | Back

Low-Power Image Recognition Challenge

Detect all relevant objects in as many images as possible of a common test set from the ImageNet object detection data set within 10 minutes.

Challenge Website (Old) | Back

The Look Into Person (LIP) Challenge

Developing solutions for comprehensive human visual understanding in wild scenarios, regarded as one of the most fundamental problems in computer vision, could have a crucial impact on many industrial application domains, such as autonomous driving, virtual reality, video surveillance, human-computer interaction, and human behavior analysis. For example, human parsing and pose estimation are often regarded as the very first step for higher-level activity/event recognition and detection. Nonetheless, a large gap seems to exist between what real-life applications need and what is achievable with modern computer vision techniques. The goal of this workshop is to allow researchers from the field of human visual understanding and other disciplines to present their progress, communicate, and co-develop novel ideas that could potentially shape the future of this area and further advance the performance and applicability of the corresponding systems in real-world conditions.

To stimulate progress on this research topic and attract more talent to work on it, we will also provide the first standard human parsing and pose benchmark on a new large-scale Look Into Person (LIP) dataset. This dataset is both larger and more challenging than similar previous ones: it contains 50,000 images with elaborate pixel-wise annotations covering 19 semantic human part labels and 2D human poses with 16 dense key points. The images, collected from real-world scenarios, contain humans appearing in challenging poses and views, under heavy occlusion, with varied appearances and low resolutions. Details on the annotated classes and examples of the annotations are available at http://hcp.sysu.edu.cn/lip/.

Challenge Website | Back

DAVIS Challenge on Video Object Segmentation

We present the 2017 DAVIS Challenge, a public competition specifically designed for the task of video object segmentation. Following in the footsteps of other successful initiatives, such as ILSVRC and PASCAL VOC, which established the avenue of research in the fields of scene classification and semantic segmentation, the DAVIS Challenge comprises a dataset, an evaluation methodology, and a public competition with a dedicated workshop co-located with CVPR 2017. The DAVIS Challenge follows up on the recent publication of DAVIS (Densely-Annotated VIdeo Segmentation), which has fostered the development of several novel state-of-the-art video object segmentation techniques. The accompanying paper describes the scope of the benchmark, highlights the main characteristics of the dataset, and defines the evaluation metrics of the competition.

Challenge Website | Back

Tidy Up My Room Challenge

Robust interaction in domestic settings is still a hard problem for most robots. These settings tend to be unstructured, changing, and aimed at humans, not robots. This makes grasping and picking a wide range of objects in a person’s home a canonical problem for future robotic applications. With this challenge, we aim to foster a community around solving these tasks in a holistic fashion, which requires a tight integration of perception, reasoning, and actuation.

Robotics is an integration discipline and significant efforts are put in by labs worldwide every year to build robotic systems, yet it is hard to compare and validate these approaches against each other. Challenges and competitions have provided an opportunity to benchmark robotic systems on specific tasks, such as pick and place, and driving. We envision this challenge to contain multiple tasks and to increase in complexity over the years.

Challenge Website | Back

Hearthstone AI Competition

The online collectible card game Hearthstone offers a rich testbed and poses unique demands for artificial intelligence agents. It is a turn-based card game between two opponents, each using a constructed deck of thirty cards along with a selected hero with a unique power. Players use their limited mana crystals to cast spells or summon minions to attack their opponent, with the goal of reducing the opponent’s health to zero. The competition aims to promote the stepwise development of fully autonomous AI agents in the context of Hearthstone.

Entrants will submit agents to participate in one of the two tracks:

  • “Premade Deck Playing” track: participants will receive a list of decks and play out all combinations against each other. Determining and exploiting the characteristics of the player’s and the opponent’s decks will help in winning the game.
  • “User Created Deck Playing” track: the competition framework allows agents to define their own deck. Finding a deck that can consistently beat a vast number of other decks will play a key role in this track. Additionally, it gives participants the chance to optimize their agent’s strategy to the characteristics of their chosen deck.

Challenge Website | Back

ICMI Eating Analysis and Tracking Challenge

The multimodal recognition of eating condition (whether a person is eating or not, and if so, which type of food) is a new research domain in the area of speech and video processing that has many promising applications for future multimodal interfaces, such as adapting speech recognition or lip-reading systems to different eating conditions (e.g., dictation systems), health (e.g., ingestive behaviour), or security monitoring (e.g., where eating is not allowed).

We define three Sub-Challenges based on classification tasks, in which participants are encouraged to use speech and/or video recordings:

    1. Food-type Sub-Challenge: Perform seven-class food classification per utterance
    2. Food-likability Sub-Challenge: Recognize the subjects’ food likability rating
    3. Chew and Speak Sub-Challenge: Recognize the level of difficulty to speak while eating

Challenge Website | Back

AutoML for Lifelong Machine Learning

In many real-world machine learning applications, AutoML is strongly needed because developers have limited machine learning expertise. Moreover, in many applications batches of data arrive daily, weekly, monthly, or yearly, and the data distributions change relatively slowly over time. This presents a continuous learning, or lifelong machine learning, challenge for an AutoML system. Typical learning problems of this kind include customer relationship management, online advertising, recommendation, sentiment analysis, fraud detection, spam filtering, transportation monitoring, econometrics, patient monitoring, climate monitoring, manufacturing, and so on. In this competition, which we are calling AutoML for Lifelong Machine Learning, large-scale datasets collected from some of these real-world applications will be used. Compared with previous AutoML competitions (http://automl.chalearn.org/), the focus of this competition is on drifting concepts, getting away from the simpler i.i.d. cases. Participants are invited to design a computer program capable of autonomously (without any human intervention) developing predictive models that are trained and evaluated in a lifelong machine learning setting.
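To illustrate the setting, here is a small self-contained sketch (synthetic data, scikit-learn assumed) in which batches arrive sequentially, the decision boundary drifts slowly, and the model is scored on each new batch before being updated on it:

```python
# Toy lifelong-learning loop: evaluate on each incoming batch, then update.
# The drifting synthetic data stands in for real application streams.
import numpy as np
from sklearn.linear_model import SGDClassifier

def batch_stream(n_batches=5, n=200, seed=0):
    rng = np.random.RandomState(seed)
    for t in range(n_batches):
        X = rng.normal(size=(n, 2))
        theta = 0.2 * t                               # boundary rotates over time
        w = np.array([np.cos(theta), np.sin(theta)])
        yield X, (X @ w > 0).astype(int)

model = SGDClassifier()
for t, (X, y) in enumerate(batch_stream()):
    if t > 0:  # test-then-train: score on the new batch before updating
        print("accuracy on batch %d: %.2f" % (t, model.score(X, y)))
    model.partial_fit(X, y, classes=[0, 1])
```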

Challenge Website | Back

Adversarial Vision Challenge

In this competition you can take on the role of an attacker or a defender (or both). As a defender you are trying to build a visual object classifier that is as robust to image perturbations as possible. As an attacker, your task is to find the smallest possible image perturbations that will fool a classifier.

The overall goal of this challenge is to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. As of right now, modern machine vision algorithms are extremely susceptible to small and almost imperceptible perturbations of their inputs (so-called adversarial examples). This property reveals an astonishing difference in the information processing of humans and machines and raises security concerns for many deployed machine vision systems like autonomous cars. Improving the robustness of vision algorithms is thus important to close the gap between human and machine perception and to enable safety-critical applications.
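As a minimal illustration of the attacker’s side, the sketch below performs a single FGSM step in PyTorch (an assumption of convenience; the challenge itself is framework-agnostic, and its attacks are typically iterative and decision-based), nudging an image along the gradient sign to increase a toy classifier’s loss:

```python
# Single-step FGSM: the simplest gradient-based adversarial perturbation.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=2.0 / 255):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Toy classifier and input, for illustration only.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image, label = torch.rand(1, 3, 32, 32), torch.tensor([3])
adv = fgsm_perturb(model, image, label)
print((adv - image).abs().max())  # perturbation bounded by epsilon
```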

There will be three tracks in which you and your team can compete.

Challenge Website | Back

TrackML: Particle Tracking Challenge

We are organizing a data science competition to stimulate both the ML and HEP communities to renew the toolkit of physicists in preparation for the advent of the next generation of particle detectors at the Large Hadron Collider at CERN. With event rates already reaching hundreds of millions of collisions per second, physicists must sift through tens of petabytes of data per year. Ever-better software is needed for processing and filtering the most promising events. This will allow the LHC to fulfill its rich physics programme: understanding the private life of the Higgs boson, searching for elusive dark matter, and elucidating the dominance of matter over antimatter in the observable Universe.

To mobilise the scientific community around this problem, we are organizing the TrackML challenge, whose objective is to use machine learning to quickly reconstruct particle tracks from the points they leave in the silicon detectors (a toy sketch follows the phase list below). The challenge will be conducted in two phases:

  • the ongoing Accuracy phase (May-Aug 2018): favoring innovative algorithms that reach the highest accuracy, with no speed concern. This phase has been accepted as an official IEEE WCCI 2018 competition (Rio de Janeiro, July 2018) and is hosted by Kaggle.
  • the Throughput phase (Sep-Oct 2018): focussing on speed optimisation. This phase has been accepted as an official NIPS 2018 competition (Montreal, December 2018) and will be hosted by Codalab.
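To give a flavour of the task, the toy sketch below treats track reconstruction as unsupervised clustering of 3D hit points (scikit-learn assumed). Competitive solutions engineer helix-aware features and much faster algorithms; this only shows the input/output shape of the problem:

```python
# Toy track "reconstruction": cluster simulated hit points with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(0)
tracks = []
for _ in range(5):                                # five toy particle tracks
    origin = rng.uniform(-1, 1, size=3)
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    steps = np.linspace(0, 1, 40)[:, None]
    # Hits lie along a straight line plus measurement noise.
    tracks.append(origin + steps * direction + rng.normal(scale=0.004, size=(40, 3)))
hits = np.vstack(tracks)

labels = DBSCAN(eps=0.05, min_samples=3).fit_predict(hits)
print("candidate tracks found:", len(set(labels) - {-1}))  # ideally 5
```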

Challenge Website | Back

Pommerman

The game is Pommerman, a variant of the famous Bomberman. There are four agents, power-ups, and bombs galore across three modes. In FFA, enter a single agent and be the last hero standing. In Team, enter a team of two agents that work together to beat the opponents. See our GitHub for detailed information on gameplay, observations, and actions.

Accomplishing tasks with infinitely meaningful variation is common in the real world and difficult to simulate; competitive multi-agent learning enables it. Every game the agent plays is a novel environment with a new degree of difficulty. Among games that fit this description, Bomberman is a fun and intuitive one that people already love to play. Additionally, it is tenable for all participants, as it is not necessary to train with pixels.
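Getting an episode running is short. The sketch below follows the example interface of the competition’s playground package (pommerman on GitHub); treat the exact names as assumptions to check against the repo:

```python
# Run one FFA episode with four baseline agents.
import pommerman
from pommerman import agents

agent_list = [agents.SimpleAgent() for _ in range(4)]  # four agents required
env = pommerman.make("PommeFFACompetition-v0", agent_list)

state = env.reset()
done = False
while not done:
    actions = env.act(state)                  # each agent chooses an action
    state, reward, done, info = env.step(actions)
env.close()
print("final rewards:", reward)
```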

Challenge Website | Back

InclusiveImages: A Challenge of Distributional Skew, Side Information, and Global Inclusion

Introduction

Questions surrounding machine learning fairness and inclusivity have attracted heightened attention in recent years, leading to a rapid emergence of a full area of research within the field of machine learning.

To provide additional empirical grounding and a venue for head-to-head comparison of new methods, the InclusiveImages competition encourages researchers to develop modeling techniques that reduce the biases that may be encoded in large data sets. In particular, this competition is focused on the challenge of geographic skew encountered when the geographic distribution of training images does not fully represent levels of diversity encountered at test or inference time.

How the Competition Works

Concretely, in this competition researchers will train on Open Images [2], a large, multilabel, publicly-available image classification dataset that has been found to exhibit a geographical skew, and evaluate on InclusiveImages, an image classification dataset collected with explicit inclusion goals, designed as a stress-test of a model’s ability to generalize to images from geographical areas under-represented in the training data.

In addition to the Open Images training set, competitors will have access to a large, open-source dataset of textual information that may help provide additional information and context to aid a model’s ability to generalize to other geographical distributions. Competitors will be instructed to assume a geographic shift between training and evaluation data, but will not have all the details of what the shift is, mimicking the real-world situation in which a model is deployed in an environment markedly different from the one it was trained in, as is often the case when localities differ from global distributions.

How to Address Location Representation

Competitors should assume that locations that are over-represented at training time may not have the same level of representation at test time, and that their models will explicitly be stress-tested for performance on images from locations that are under-represented during training. Competitors will be able to validate their submissions on a validation set that has this quality, and will then be tested on a final evaluation set that exhibits this quality in a different way.

Challenge Website | Back

The Conversational Intelligence Challenge 2 (ConvAI2)

There are currently few datasets appropriate for training and evaluating models for non-goal-oriented dialogue systems (chatbots); and equally problematic, there is currently no standard procedure for evaluating such models beyond the classic Turing test.

The aim of our competition is therefore to establish a concrete scenario for testing chatbots that aim to engage humans, and to become a standard evaluation tool that makes such systems directly comparable.

This is the second Conversational Intelligence (ConvAI) Challenge; the previous one was conducted under the scope of the NIPS 2017 Competitions track. This year we aim to improve over last year by:

  • providing a dataset, Persona-Chat, from the beginning
  • making the conversations more engaging for humans
  • simplifying the evaluation process (automatic evaluation, followed by human evaluation)

Challenge Website | Back

The AI Driving Olympics (AI-DO)

The Duckietown Foundation is excited to announce the official opening of the AI Driving Olympics (AI-DO), a new competition focused on AI for self-driving cars.

The first edition of the AI-DO will take place in December 2018 at NIPS, the premier machine learning conference, in Montréal. It will be the first competition held at a machine learning conference with real robots.

The second edition of AI-DO is already scheduled to take place in May 2019 in conjunction with the International Conference on Robotics and Automation (ICRA) 2019.

The main purpose of the competition is to probe the frontier of the state of the art in machine learning in the interactive and embodied setting.

Recent progress in deep learning, machine learning, and reinforcement learning has produced incredible results. This competition is designed to evaluate the real ability of these learning-based systems to control physical mobile robots.

Challenge Website | Back

AI for Prosthetics challenge

Welcome to the AI for Prosthetics challenge, one of the official challenges in the NIPS 2018 Competition Track. In this competition, you are tasked with developing a controller that enables a physiologically-based human model with a prosthetic leg to walk and run. You are provided with a human musculoskeletal model, a physics-based simulation environment (OpenSim) in which you can synthesize physically and physiologically accurate motion, and datasets of normal gait kinematics. You are scored on how well your agent adapts to a requested velocity vector that changes in real time.
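A minimal interaction loop, assuming the competition’s osim-rl package and its documented ProstheticsEnv interface (double-check the names against the official starter kit), looks like this:

```python
# Random-policy rollout in the prosthetics environment.
from osim.env import ProstheticsEnv

env = ProstheticsEnv(visualize=False)
observation = env.reset()
total_reward, done = 0.0, False
while not done:
    # Random muscle activations; a real controller maps observations
    # (kinematics plus the requested velocity vector) to activations.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```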

Our objectives are to:

  • bring deep reinforcement learning to bear on problems in medicine,
  • promote open-source tools in RL research (the physics simulator OpenSim, the RL environment, and the competition platform are all open-source),
  • encourage RL research in computationally complex environments with stochasticity and high-dimensional action spaces.

Challenge Website | Back

EmoContext

When you read “Why don’t you ever text me”, does it convey an angry emotion or a sad one?

Understanding emotions in textual conversations is a hard problem in the absence of voice modulation and facial expressions. Our shared task, “EmoContext”, is designed to invite research in this area. Whether you are an expert in this field or trying to learn the ropes of natural language processing or deep learning, dive in and participate! The shared task is also designed to encourage young researchers to get started, so if you have any doubts, just ask and we will help you out!

Task Description

In this task, you are given a textual dialogue, i.e., a user utterance along with two turns of context, and you have to classify the emotion of the user utterance into one of four emotion classes: Happy, Sad, Angry, or Others.
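The sketch below shows the shape of the task with an illustrative record layout and a trivial placeholder baseline; the field names are hypothetical, not the official data format:

```python
# Illustrative EmoContext record: a user utterance with two turns of context.
example = {
    "turn1": "Why don't you ever text me",  # user
    "turn2": "Sorry, I have been busy",     # other speaker
    "turn3": "You always say that",         # user utterance to classify
}
LABELS = ["happy", "sad", "angry", "others"]

def classify(dialogue):
    # Trivial keyword baseline; a real system trains a model on all turns.
    text = dialogue["turn3"].lower()
    return "angry" if ("always" in text or "never" in text) else "others"

print(classify(example))  # -> "angry"
```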

Challenge Website | Back

Credits: Ankush Chatterjee, Microsoft AI & Research, India

Microsoft AI Challenge India

Search engines like Bing employ AI to fetch faster, more relevant search results. We are now moving towards a world where we want answers to our questions, not just weblinks for our queries. Participate in the Microsoft AI Challenge India 2018 to find the most relevant answer to a given question from a set of potential answers.

Challenge Website | Back

Credits: Ankush Chatterjee, Microsoft AI & Research, India
