When parents suspect a neurological disorder in their child, they will be able to film the child and the algorithm will make a diagnosis. Monika Redzisz talks to Łukasz Kidziński, Stanford University

Monika Redzisz: ”The application of modern data analysis in biomechanics”. This is what you officially do at Stanford University. Meaning: what exactly?

Łukasz Kidziński, Ph.D.*: Movement analysis, specifically the analysis of how a patient walks. Cerebral palsy, stroke, Parkinson's disease and many other neurological diseases cause impaired movement, and its quantitative assessment is of key importance in therapy. This matters particularly in children: it's best to diagnose them when they take their first steps.

There is a costly method available today called optical motion capture. Markers are attached to the patient's body, and the patient walks around a room where cameras register the positions of the markers. On that basis it's possible to capture anomalies in movement and measure their deviation from the norm. Unfortunately, this examination costs around two thousand dollars, and not many can afford it.

I proposed a method of estimating the movement parameters with the help of artificial intelligence. We record a video of a patient walking through a room. A neural network analyzes the position of the body frame by frame: left knee, right knee, left ankle, right ankle, left hip, right hip, creating a time series for each of these points. A second neural network then uses these time series to make a diagnosis and produce metrics for the physician. For example, the range of knee movement is an important piece of information, because some patients with cerebral palsy can't fully bend the knee due to a shortened thigh muscle.
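The two-stage pipeline described here can be sketched in miniature. The snippet below is a toy illustration only; the joint names, the coordinate format and the two-frame sequence are assumptions for the example, not the actual Stanford system. Given per-frame 2-D keypoints of the kind a pose estimator emits, it computes one of the metrics mentioned: the range of knee movement.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp against floating-point error before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def knee_range(frames):
    """Range of knee flexion (degrees) over a sequence of frames.

    Each frame is a dict with 2-D image coordinates for the 'hip',
    'knee' and 'ankle' of one leg (illustrative keypoint names).
    """
    angles = [joint_angle(f["hip"], f["knee"], f["ankle"]) for f in frames]
    return max(angles) - min(angles)

# Toy sequence: a leg going from fully extended (180 deg) to bent (90 deg).
frames = [
    {"hip": (0, 0), "knee": (0, 1), "ankle": (0, 2)},  # straight leg
    {"hip": (0, 0), "knee": (0, 1), "ankle": (1, 1)},  # knee bent to 90 deg
]
print(round(knee_range(frames)))  # 90
```

In the system described, a diagnostic network would consume whole time series of such keypoints rather than a single hand-written metric; the angle computation above only shows how a per-frame skeleton reduces to a clinically meaningful number.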

How did you train the algorithm? Where’s the data from?

We had access to a large database in a children's hospital in Minnesota where, as it turned out, patients had been recorded for thirty years to make it easier for physicians to make a diagnosis. We have really good results: 93 percent accuracy in anomaly detection. One parameter reduces the accuracy: the position of the ankle joint, which you simply can't see in a video. That alone brings the results down to 93 percent, but it's still high enough to use the algorithm clinically. We already have a prototype, and we are now trying to publish our work on the subject in a renowned scientific journal.

What will the implementation look like?

Everyone will have the application on their phone, probably from a generally accessible, free source (open-source software). When parents suspect their child has a neurological disorder, they will be able to film the child, record the way they walk, and the algorithm will make a diagnosis. An initial diagnosis, of course.


I suspect that most parents will be given a recommendation to contact a physician. Not because the neural networks might be imprecise, but because the results will have to be properly interpreted. It's always going to be better to visit a doctor with our report than to make the final decision ourselves.

What do you do in connection with the Google Street View project?

It started with the Ph.D. thesis project of my colleague Kinga Kita-Wojciechowska from the University of Warsaw. She works on risk analysis methods in car insurance and invited me to collaborate on the project. Insurance companies often define the level of risk based on the policyholder's postal code. If they live in a "bad" district, they will pay higher premiums, because the likelihood of the car being badly damaged increases. But there is large variability of risk within one postal code: some addresses are "better" and others are "worse". Additional information can surface that affects one address but not another.

If I live above a liquor store, then the risk of my car getting damaged increases.

That's exactly right. The same applies to insuring an apartment against burglary, and so on. We wanted to go down to the level of a specific address, not just a postal code. We wondered what kind of information could be used to predict risk and where to get it from. We came to the conclusion that how a particular building looks can serve as a proxy for its residents' social status and insurance risk. It turned out that people who live in less well-kept houses report damage more often. The model still uses the postal code, which accounts for whether it's central Warsaw or a little town, but we also differentiate between well-kept and neglected buildings.
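The idea of layering a building-level signal on top of a postal-code base rate can be sketched as follows. This is a toy illustration only; the function, the score scale and the surcharge weight are assumptions for the example, not the model from the study.

```python
def adjusted_premium(base_rate, upkeep_score, max_surcharge=0.3):
    """Adjust a postal-code base premium by a building-upkeep score.

    base_rate     -- premium implied by the postal code alone
    upkeep_score  -- 0.0 (neglected) to 1.0 (well-kept), e.g. predicted
                     by an image model from a street-level photo
    max_surcharge -- extra fraction charged for the most neglected buildings
    """
    if not 0.0 <= upkeep_score <= 1.0:
        raise ValueError("upkeep_score must be in [0, 1]")
    return base_rate * (1.0 + max_surcharge * (1.0 - upkeep_score))

# Two addresses in the same postal code: a well-kept building keeps the
# base rate, a neglected one pays up to 30% more.
print(adjusted_premium(1000.0, 1.0))  # 1000.0
print(adjusted_premium(1000.0, 0.0))  # 1300.0
```

In practice such a score would itself come from a trained image classifier, and the surcharge would be fitted to claims data rather than fixed by hand.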

It can be really deceiving. A worn-out tenement building where apartments cost exorbitant sums of money, meaning wealthy people usually live there, will be evaluated as riskier than a newly painted residential block built in the 1960s.

Yes. Outliers are a standard problem in statistical methods. Insurance firms set higher premiums for people who drive red cars because, statistically speaking, that's an indicator of aggressive driving, which obviously doesn't mean that every owner of a red automobile drives aggressively.

Still, this method discriminates against the less financially fortunate.

Indeed. In the States there's another aspect to it, the racial one, because a correlation between a district and the state of its buildings still exists there. Google Street View isn't some innocent tool. It makes it possible to extract information about every one of us without our consent. For us scientists this is an interesting theoretical issue, but on the flip side, we were given this data by a particular insurance company that wants to make money on it. Many insurance companies are no doubt already trying to use these methods.

Do you often come across discriminatory algorithms?

Yes. A model is usually a representation of the data used to build it. It's interesting that we start to dwell on this only when we see the mistakes of the algorithms, a bit like parents who see a reflection of themselves in their children's behavior. We suddenly realize that we've been doing something evil for decades, discriminating against women and African Americans; our algorithms have only had to learn it from us. I think that in this particular respect the situation is far worse in Europe than in the States; they talk a lot about it in the U.S., while Europe is still not mature enough for such discussions.

You live in Silicon Valley, you work at Stanford. Do you have the feeling that you live in an elite enclave?

Yes, it can be felt. At the university we build models that will be able to predict the progression of a disease more accurately, and we feel we're saving the world. We sometimes forget there are places in the world where people have no access to running water and the scale of their problems is much bigger than the ones we face. But it doesn't just apply to Silicon Valley. I feel the same way when I go back to Poland. Actually, Poland has already joined the elite places of the world, compared to eighty percent of the planet, but most of us just don't realize it.


*Łukasz Kidziński, Ph.D., mathematician and computer programmer. He graduated from the University of Warsaw and obtained his Ph.D. in mathematical statistics at the Université Libre de Bruxelles. He then joined the CHILI group (Computer-Human Interaction in Learning and Instruction) in Lausanne, where he worked on human-robot interaction. While there, he co-founded DeepArt, a neural style transfer company and platform on which neural networks can render any photograph in the style of a personally chosen painting. He has been working at Stanford University, USA, for three years at the intersection of computer science, statistics and biomechanics. He is a member of the Mobilize Center team, which works on data analysis and its application in healthcare.

Read the Polish version of this text HERE
