

I was at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) the other day observing their driverless vehicle compound. I remarked to one of the scientists how difficult it is to choose a career that is future-proof against the inexorable rise of automation. The conventional wisdom is that we should be encouraging students to do STEM subjects. Readers of my previous blogs will know I don't entirely subscribe to this convention. In fact I'm thinking that the real future might well lie in concentrating on the humanities.

I posited to the scientist that they might need to employ an ethicist and a philosopher in the near future. He looked bemused. While I was being playful, there was a serious side to what I was proposing. Follow my reasoning. Driverless cars require technology and software to make them work effectively and safely. It's this last hurdle, safety, that must be cleared before driverless cars are given the green light. It's about social licence: until the public as a whole are comfortable, a technology can't be mainstreamed. Google Glass is a good example. Career advisers would jump in here and point to my apparent contradiction: surely driverless cars mean the key job skills of the future are in STEM subjects, which can provide the workforce with coding, robotics, engineering and so on.


The more the dispassionate 'because we can' application of science becomes our societal norm, the more we need philosophy and ethics to provide a counterbalance. Let's take driverless cars as an example. I put it to the CSIRO scientist that on some occasions the decision a machine will need to make in the driverless environment will be between harm and less harm. Imagine an accident in which there are only two 'escape routes'. The vehicle may be confronted with hitting a busload of old people out on a day trip in one direction, or a busload of school children in the other. Presented with these two options, the car must decide based on mega-fast computation, which will itself rest on a set of codified rules. It's cognitive computing at its finest. Once again we are back to the coders. But just how will they make those decisions? Where this gets tricky is where the above scenario gets nuanced. Let's say the bus with the 'oldies' is in fact a group of past-retirement-age, but still working (Scott Morrison would be pleased), Nobel scientists working on a cure for childhood cancer, and the school bus carries a group of terminally ill children whose cancer could otherwise be cured by the other bus's occupants. Much more complex. Would you want your freshly graduated coder writing the code on this one?
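To see why this is a philosopher's problem and not just a coder's, here is a deliberately crude Python sketch of what such codified rules could look like. Every category, weight and function name here is invented for illustration, and that is precisely the point: someone has to pick the numbers.

```python
# A toy illustration only: hypothetical 'harm weights' a coder might be
# asked to assign. Every number and category here is an invented
# assumption, which is exactly the ethical problem described above.

def harm_score(outcome):
    """Sum a hypothetical harm weight over the people affected."""
    weights = {"child": 1.0, "adult": 0.8, "elderly": 0.6}  # who decided these?
    return sum(weights[person] for person in outcome)

def choose_escape_route(routes):
    """Pick the escape route whose outcome minimises the harm score."""
    return min(routes, key=lambda r: harm_score(routes[r]))

routes = {
    "left": ["elderly"] * 40,   # the busload of retirees
    "right": ["child"] * 30,    # the school bus
}
print(choose_escape_route(routes))  # the car swerves left: 24.0 < 30.0
```

Notice that the nuanced version of the scenario, Nobel scientists versus terminally ill children, cannot be expressed in these category weights at all. The rules answer a question the coder was never qualified to frame.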

Only an ethicist or philosopher has the wherewithal to really give us the right steer here (pardon the pun). When coding up some of these decisions, deliberate and sometimes Boolean choices will need to be made explicit. There will exist somewhere in the cloud a set of rules that can be viewed: a sort of 'value of life blockchain', for want of a better description. Surely the public has the right to have input into these decisions. This might seem far-fetched, but it's coming, as are driverless vehicles.


That had me reflecting: where else might a philosophy graduate add value in the workplace? I've been reading Alec Ross's new book The Industries of the Future. He looks, not entirely optimistically, at the impacts of digitisation on the future and what jobs might be gained and lost in the process, as well as what countries might gain and lose in the transition already underway in the digital revolution. He talks about demographics and the rise of robotics, in Japan in particular. Culturally, Japanese society, unlike the West, has looked after its elderly within the family unit. With a declining young population this is no longer an option for many. Not surprisingly, the tech-smart Japanese industries have come up with a solution: robots. The Japanese are pretty advanced, and I think most are familiar with Honda's Asimo, which seems quite lifelike even though clearly a robot. Asimos are now capable of looking after the physical care needs of elderly Japanese patients. Increasingly, through machine learning and cognitive computing, they are advancing towards being able to deal with the psycho-social needs of patients as well. All a good thing, right?


Not necessarily so. The imparting of wisdom accumulated from years of successes and failures is a feature of all societies, especially Japan's. What we now confront with Asimo is a break in a key anthropological aspect of society: the young learning from the old. There is a clear philosophical component to be considered here before we rush headlong into a robotic solution. Whether we adopt a given piece of automation should not be determined solely by whether the 'technology is there yet'.

One of the reasons robots haven't yet featured at the top end of society is the time it takes for an individual robot to learn. With cloud computing, robots are learning from the collective experience of not just themselves but all the other inter-connected robots. This clearly has an ethical and philosophical component as well. If the children of busy working parents are ultimately going to be raised by robots, do we really want the wisdom passed to our next generation to be a synthesised fusion of collective experience? Presumably robots learning at a global level and at digital speed are going to make fewer and fewer mistakes. Some of the best lessons I've learned have been what not to repeat from earlier mistakes I have made. To lose access to this wisdom, borne of the school of hard knocks, may have profound impacts on future generations, and we may well not realise it until too late.
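The mechanics of this pooled learning can be sketched in a few lines. Everything below is a made-up toy, not any real robotics platform, but it shows why a connected fleet learns faster than an individual, and why no individual in it ever gets to make its own mistakes:

```python
# Toy sketch of fleet learning: each robot logs its failures to a shared
# pool, so no robot ever repeats a mistake any other robot has made.
# All names and tasks are invented for illustration.

shared_mistakes = set()  # the 'cloud': pooled experience of every robot

class Robot:
    def __init__(self, name):
        self.name = name

    def attempt(self, task, failed):
        if failed:
            shared_mistakes.add(task)  # one robot's failure teaches the fleet

    def knows_to_avoid(self, task):
        # a robot 'knows' from the pool, including others' experience
        return task in shared_mistakes

a, b = Robot("a"), Robot("b")
a.attempt("lift patient too quickly", failed=True)
print(b.knows_to_avoid("lift patient too quickly"))  # True, though b never erred
```

Robot b inherits a sanitised record of failure without ever having failed, which is exactly the hard-knocks wisdom the paragraph above worries about losing.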

But all this is over the horizon, so we might assume we have time to get it right. I don't think so. The shortage of philosophy graduates right now suggests we need to be encouraging bright young students in our schools to take up a contemplative life, where thinking for the sake of thinking is the main component of the position description. We need such minds now. The moral and ethical issues with stem cell research are obvious. Others that readily spring to mind are free trade agreements, food additives, ethical investing, DNA sequencing and the like.


There are subject matters that mightn't be so obvious but would definitely benefit from a philosophical approach. The Zika virus is one that strikes me as needing careful consideration. We have the technology now to eradicate the most dangerous animal in the world: the mosquito. A no-brainer, really. Think how much suffering we could avoid with no more malaria, dengue and Zika. Pause for a moment, though, and consider it from a philosophical perspective. According to Marian Blasberg, Hauke Goos and Veronika Hackenbroch, writing in Der Spiegel, one of the main reasons the Brazilian rainforest is still largely protected from development is the scourge of the mosquito. Without those green lungs in reasonable order, the world would be a much worse place, with arguably just as much misery as the mosquito ever caused. This is an issue for us right now, as scientists are hitting the field with genetically modified mosquitoes in the back of their Land Rovers. Who are we to deliberately decide the extinction of an entire genus?

At work we have recently introduced a journals club, housed in a proposed Research, Reading and Reflection Room (the 3Rs for short). The usual suspects are there, e.g. Forbes and the Harvard Business Review. One that might raise a few eyebrows, but is I think an essential read for modern managers, is New Philosopher. In case the auditors think I'm getting profligate, I will leave it to the master philosopher Confucius to justify the subscription: 'Learning without thought is labour lost. Thought without learning is perilous.'