"Slides & Thoughts from the Playfair AI Summit in London" in Humanizing Tech
Machine Learning is here to stay, and Data is all that matters
Playfair AI Summit
Last Friday in London, about 350 people came together to discuss the state of machine learning and AI in startups. The summit is run by Nathan Benaich, who also publishes a fairly regular email newsletter about the state of AI, which I recommend subscribing to.
You may not have heard of this conference because it’s only in its second year, but I expect it to grow over time. It was sponsored by Bloomberg, and while I wasn’t able to make the flight this year, one of my colleagues, Paul Mardling, did attend and shared some thoughts:
- Stating the obvious, this area is hot; lots of VCs are looking for investment opportunities
- Being in the City of London, there were lots of fintech startups in attendance
- There seems to be a big battle on the academic side between those taking a pure mathematical, algorithmic approach and those taking a neuroscience ‘let’s reproduce the human brain’ approach. Lots of comments that humans are great pattern-recognition machines but almost useless on any inputs having more than 7 or 8 independent parameters
- The field seems to split into a camp trying to be ‘as good as’ humans and a camp trying to be ‘better than’ humans
- You need to assume (and potentially add) noise in the inputs to model anything real-world
- There is worry that academia is spending too much time working on the same data sets, which is narrowing researchers’ focus
- From the commercial side, a feeling that the currently available algorithms are ‘good enough’; there is no real intrinsic value in the algorithms anymore (they’re free)
- Input data is still the biggest issue; everyone would take more data over better algorithms
- There is a real split on the commercial side between those happy with a ‘black box’ algorithm they don’t need to understand and those willing to use slower or more expensive algorithms in order to gain some insight into why a particular result was given
- Choice of algorithms and configuration is still as much an art as a science; people are working on tools to try to help with this
- Monitoring failures is a real issue: if a movie isn’t recommended, how do we know that the user wouldn’t actually have liked it?
- Ethics is going to be important: the algorithms will have (and already do have) a lot of power, so who is responsible for their decisions?
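The point above about assuming and adding noise in the inputs can be sketched as a simple data-augmentation step. This is a minimal illustration of the general idea, not anything presented at the summit; the noise scale and array shapes are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_input_noise(X, sigma=0.1):
    """Return a copy of X with Gaussian noise added to every feature.

    sigma is an illustrative, hypothetical noise scale; in practice you
    would pick it based on the measurement error you expect in the real
    world. The original array is left untouched.
    """
    return X + rng.normal(loc=0.0, scale=sigma, size=X.shape)

# 100 hypothetical samples with 8 features each (all zeros, for clarity).
X_clean = np.zeros((100, 8))
X_noisy = add_input_noise(X_clean)
```

Training on `X_noisy` rather than `X_clean` forces a model to cope with the kind of imperfect inputs it will see outside the lab.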
The scariest concept of the day: one of the neuro-ML people claimed they’ve analysed the neural pathways involved in depression and want to run trials with a video that they claim is designed to retrain those pathways to remove the depression.
Here’s a link to 9 slide decks that were discussed during the conference:
It sounds like they’ll also be releasing some video recordings in the coming weeks, which I’ll update this post with once they’re available.
from Sean Everett on Medium http://ift.tt/29DKPbF