I am currently reviewing a book on machine learning (Python Machine Learning Essentials - Packtpub), and with the recent news of a production-line robot picking up an employee and crushing him, I cannot help but think of the potential danger we are unleashing. It must be remembered that machines are driven by algorithms that are thoroughly understood and, in ideal conditions, entirely deterministic. And herein lies the danger. The assumption is that, although conditions (the physical operating environment and the extended information network the machine may be connected to) are never ideal, they are so close to ideal that it does not matter. There are at least two problems with this assumption. Firstly, in a complex system very small changes can be amplified into large consequences (the butterfly effect). Secondly, there are malicious, or lazy, humans.

The challenge is to build safeguards to control for such things. For the first problem, small random events being amplified, nature offers a very good example of how this phenomenon can actually be capitalized on: evolution. As for lazy humans .. who knows?
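The amplification of tiny differences mentioned above can be illustrated with a toy example (my own sketch, not from the book): the logistic map, a standard one-line chaotic system. Two deterministic trajectories that start a hair's breadth apart end up completely uncorrelated after a few dozen steps.

```python
# Sensitive dependence on initial conditions ("the butterfly effect")
# demonstrated with the logistic map x -> r*x*(1-x), which is chaotic
# for r = 4. The starting points, perturbation size, and step count
# below are arbitrary choices for illustration.

def logistic_map(x, r=4.0):
    return r * x * (1.0 - x)

def diverging_trajectories(x0=0.2, perturbation=1e-10, steps=60):
    """Iterate two copies of the map whose starting points differ
    by `perturbation`; return the gap between them at each step."""
    a, b = x0, x0 + perturbation
    gaps = []
    for _ in range(steps):
        a, b = logistic_map(a), logistic_map(b)
        gaps.append(abs(a - b))
    return gaps

gaps = diverging_trajectories()
print(f"initial gap: 1e-10, largest later gap: {max(gaps):.3f}")
```

The gap grows roughly exponentially (it doubles, on average, each step for r = 4), so a perturbation of one part in ten billion swamps the signal within about 35 iterations. A perfectly deterministic rule is no guarantee of predictable behaviour once measurement error enters the loop.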