As you might have noticed, yesterday I didn’t post an entry. The reason is not that I didn’t want to write, but that I didn’t like what I wrote. As you might have guessed from the title, I decided to talk about the concept of Industry 4.0, one of my main research interests. However, after spending quite some time playing with my draft, I wasn’t convinced that I was offering a new perspective on the topic. Fortunately, after a much-needed break, I decided to refocus this post on some of the reasons why I chose to research this topic, and some of the insights that I have developed after quite some time working with this concept. But as always, let’s first describe what Industry 4.0 is and how it has evolved over the past years.
Industry 4.0
The term Industry 4.0 refers to a rising trend towards increasing the automation and interconnection of industrial processes, both inside organisations and outside, among partners and clients. Industry 4.0 is an ex-ante term, which means it was introduced before it had wholly materialised. Even now, the term encompasses a wide range of strategies, policies and trends. It was launched in 2011 by the German Government as a plan to increase the productivity, efficiency and flexibility of the German manufacturing industry. Since then, various countries, trade associations and even companies have developed their own take on the concept, creating a wide range of initiatives associated with the term.
Alternatively, Industry 4.0 might be better recognised by the technologies that underpin its adoption: the Internet of Things, Cognitive and Cloud computing (often grouped under the umbrella term of Artificial Intelligence), Cyber-Physical Systems, Smart Factories, Virtual/Mixed/Augmented Reality (VR/MR/AR), Connected and Autonomous Vehicles, 5G connectivity, and many others. Yes, this is a long list, and that’s the reason why Industry 4.0 strategies can differ considerably from one another. Notably, this has led to the proposition of wider-ranging concepts that could encompass all the different perspectives; the Fourth Industrial Revolution and the Second Machine Age are, in my opinion, the ones that best align with the objectives of Industry 4.0.
The Fourth Industrial Revolution and the Second Machine Age
The concept of the Fourth Industrial Revolution was initially proposed in 2016 by Klaus Schwab, founder of the World Economic Forum. This so-called Fourth Industrial Revolution follows the technological evolution of production systems seen over the past centuries. The first industrial revolution was characterised by the adoption of steam engines and the shift towards mechanised production. The second industrial revolution brought electricity and the division of labour for mass production. The third industrial revolution was powered by the transistor and started the automation of production lines. Now, this fourth industrial revolution claims to extend those automation and interconnection capabilities, powered by pervasive digital technologies and easy access to the internet. Understandably, many argue that the latter is just an extension of the third. However, supporters of the concept claim that this revolution extends beyond production systems and has a profound impact on society and the environment. In his book on the topic, Schwab states that “we are at the beginning of a revolution that is fundamentally changing the way we live, work, and relate to one another”. This argument is reinforced by the notion of the Second Machine Age.
The Second Machine Age, proposed by Erik Brynjolfsson and Andrew McAfee, mirrors the events of the industrial revolutions. However, instead of looking at the type of technology implemented, they consider the types of human activities that machines replaced. During the First Machine Age, starting with the first industrial revolution, we saw how technology replaced humans performing physical activities. Unsurprisingly, humans aren’t well suited to arduous, dangerous or repetitive tasks, so this isn’t as bad as it sounds. Machines make the perfect substitute for these types of activities, and over the centuries, we have increased their capabilities to assist humans in countless tasks. Now, with the rise of computers’ processing power and connectivity capabilities, in the Second Machine Age, we are starting to see machines replacing humans performing cognitive tasks. Machine learning, natural language processing, image recognition, speech synthesis, and many other technologies, all clustered under the banner of Artificial Intelligence, are examples of machines performing cognitive tasks that we once thought only humans could do. Now, we use them every day in the form of virtual assistants, travel directions, suggestions on what to watch or listen to next, and so on.
Human vs Machine agency
I feel that I have dwelt too long on the “introduction” to this topic. Still, I think it is essential to know where these concepts come from to understand where they might go next, and among the various aspects that I consider in my research, agency is one of the most interesting. A quick side note here: agency refers to the capacity of an actor to decide, entirely on their own, what to do. In a way, it is similar to the concept of free will. However, in the case of agency, it is not just a matter of having it or not. Instead, the concept of agency seeks to describe what could create it, affect it, or take it away.
Now, with the technological progress that we have witnessed over the past decades, it might be warranted to ask: where is the line that divides machine from human agency? This question lies behind even trivial activities. For example, when do you stop watching a video because that’s what you want and start watching it because that’s what a computer is telling you to do? YouTube, Netflix, Amazon, anyone? Or why you take that route to work, or listen to that new song, and so on. No doubt, machines are getting very good at knowing us. However, my objective with this post is not to go into profound philosophical questions or conspiracy theories. Instead, I want to focus on a straightforward way to deal with the situation.
Our agency comes from our understanding of the world and the objects around us. We don’t feel that we lose agency because we can’t fly; we understand that we are bound to a set of physical rules that define the nature of our world. In the same way, we gain agency over machines when we understand what precisely a technology is doing and how it is doing it. This effect is evident when people dislike new technological artefacts, even when their capabilities could make their lives easier. However, this might not be their fault. Often, the designers of such devices are more concerned with creating a viable product than with empathising with users. This problem is further highlighted when the users themselves don’t understand why they want a particular technology.
As I have stated before, I am a technology optimist; however, I don’t believe that technology is the solution to all our problems. Technology is a tool, and we need to understand what, how and why we want to use it. In that sense, Artificial Intelligence is a powerful tool. It uses all our information, which we willingly provide, to make educated guesses about what we might do next. However, when we let artificial intelligence become a black box, there is an imminent risk of, to put it mildly, miscommunication. As the saying goes, we need to be careful what we wish for. And that starts by understanding what technology does and could do.
As you can see, this is a fascinating topic and a fundamental notion in my research. But since I am again overextending, I will leave it here and come back to the issue in a future post. In any case, thanks again for reading, and reach out if you have any comments or questions.