AI and Big Data Expo

2018

I recently started on a learning program provided through work (Government Digital Service), to better understand emerging technologies, with a focus on Artificial Intelligence (AI) and Machine Learning.

For 10 weeks I am studying for 2 days a week. The learning is fairly self-directed, though once a week I meet with my brilliant mentor Ivan, who is a lecturer in AI and Data Science at The University of Bristol. He is also a fellow at The Alan Turing Institute, the national institute for data science and artificial intelligence, based in the British Library.

First things first, some terminology:

Algorithm

Think of an algorithm as a set of instructions: inputs go in, outputs come out. A cake recipe is an everyday example.
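In code, the same idea is just a function: fixed steps that turn inputs into an output. A toy sketch of the cake recipe (the function name, quantities and steps are all invented for illustration):

```python
def bake_cake(flour_g, eggs, sugar_g):
    """A recipe is an algorithm: fixed steps turning inputs into an output."""
    batter = f"mix {flour_g}g flour, {eggs} eggs, {sugar_g}g sugar"
    cake = f"bake ({batter}) at 180C for 25 minutes"
    return cake

print(bake_cake(200, 2, 150))
```

Run with different inputs and the same steps produce a correspondingly different output, which is all "inputs that result in outputs" means.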

Artificial Intelligence

When a machine performs tasks that would usually require human brain power to accomplish.

Data Science

Turning data into useful information. The study of data science brings together researchers in computer science, mathematics, statistics, machine learning, engineering and the social sciences.

Machine Learning

A subset of Artificial Intelligence. It is based on writing computer algorithms (sets of instructions given to a computer) that can learn from information they have previously processed in order to generate an output.

Systems appear ‘intelligent’ because they can adapt to different situations based on what they have learnt/seen before.
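A minimal sketch of that idea, using a made-up toy dataset: the program is never told the rule, it infers a label for new inputs from labelled examples it has seen before. This is a one-nearest-neighbour classifier in plain Python, one of the simplest possible learning methods:

```python
# Toy training data: (hours of sunshine, mm of rain) -> what we did that day.
training = [((8, 0), "picnic"), ((7, 1), "picnic"),
            ((2, 9), "stay in"), ((1, 12), "stay in")]

def predict(point):
    """Label a new day by copying the label of its closest known example."""
    def dist(a, b):
        # Squared Euclidean distance between two (sunshine, rain) points.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training, key=lambda example: dist(example[0], point))
    return label

print(predict((6, 2)))  # a sunny, dry day
```

The "learning" here is simply remembering past examples; more sophisticated systems compress those examples into a model, but the principle is the same.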

I found the FAQ of The Alan Turing Institute gave a great introduction to these terms. Their website in general is great for gaining an understanding of the different areas that AI encompasses.

Example uses of AI and machine learning include:

  • Fraud detection
  • Smart homes, where decisions can be made based upon factors such as energy consumption or perceived home safety
  • Connected and self driving cars
  • Sentiment analysis (for example analysing if a review is positive or negative)
  • Managing workloads of computer systems (Google DeepMind reduced the energy used for cooling their data centres by 40%)
  • Health care – helping doctors with diagnosis
  • Recruitment – sourcing candidates and interviews with chatbots
  • Predicting vulnerability exploitation in software
  • Financial market prediction 
  • Accounting and Fintech – automating data entry and reporting
  • Proposal review – reviewing contracts, cost, quality level
  • Voice assistants
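To make the sentiment analysis example above concrete, here is a deliberately crude word-counting sketch. Real systems use learned models rather than hand-written word lists, and the lists below are invented for illustration:

```python
# Hand-picked word lists (a real system would learn these from labelled data).
POSITIVE = {"great", "excellent", "love", "good", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "poor", "disappointing"}

def sentiment(review):
    """Classify a review by counting positive versus negative words."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The food was great and the service excellent"))
```

Even this toy version shows the shape of the task: map free text to a small set of labels. A machine learning version replaces the fixed word lists with weights learned from thousands of labelled reviews.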

Last week, I managed to catch a panel discussion on the subject ‘AI for Social Good’ at the AI and Big Data Expo. The Head of Programme for the Digital Commission at the disability charity Scope gave some interesting examples of AI being used for social good.

She spoke about how in New York, screens that people can interact with using sign language are being trialled and installed on buses to improve accessibility. Microsoft are developing ‘Seeing AI’, a text recognition application designed with people who are blind. Really excitingly, The National Theatre and Accenture have developed Smart Caption Glasses, which let people with hearing loss see a transcript of the dialogue and descriptions of the sound from a performance, displayed on the lenses of the glasses.

The panel also discussed how, although the design focus of these AI applications may have been people with specific disabilities, they will benefit many others: somebody holding a baby is suddenly less mobile than they were before. It is a shame that a business argument for designing accessibly needs to be made, but designing for people with specific needs shouldn’t be viewed as designing for a small subset of users, because the benefits will cascade. Designing accessibly shouldn’t be an afterthought.

Of course it isn’t all good. Systems learn based on what they have seen before, and society is inherently biased against minority groups. If we let these (to name a few) racist, sexist and transphobic views of the world run through our systems, and then rely on those systems to make predictions based on that information, we will only amplify bias. The people developing the systems need to do so with this in mind, and machine learning models should be made transparent where possible.

There is growing concern that some job sectors will be replaced by AI. If your job involves solving lots of varied problems and a high level of human interaction, you are probably less at risk. If you are a train driver, mortgage advisor or stock market trader, it is quite possible you could be affected one day.

Another concern is data privacy. As users, we want our data to be kept safe, yet it’s no secret that tech companies largely profit from selling on information. We accept this as the trade-off for not paying to use their platforms, but we are not very confident about how our data will be used or what the limits of surveillance are. On the flip side, training machine learning systems to make accurate predictions requires lots of data. That is fine if you work at a large corporation with plenty of access to user data, but if not there is a reliance on collecting it yourself or using open data sets.

In 2018 the Department for Digital, Culture, Media and Sport released the Data Ethics Framework. The framework sets out clear principles for how data should be used in the public sector. It will help us maximise the value of data whilst also setting the highest standards for transparency and accountability when building or buying new data technology. Many open data sets are available on https://data.gov.uk/.

Some examples of how public sector organisations are implementing emerging technologies, including AI and machine learning have been presented here by the Innovation Team at Government Digital Service. Projects range from anomalous ship detection to resource allocation of fire engines to predicting people in crisis. A visualisation of the research can be found here and can be filtered in various ways.
