By Maria Koulourioti
Should we be concerned about neurotechnology?
One in every eight individuals globally has a mental or neurological condition (IHME, 2019), which accounts for one-third of healthcare costs in wealthy nations and is becoming a significant burden in low-to-middle-income countries. Neurotechnology can bring innovative therapies as well as preventative and therapeutic solutions to people all around the world. This technology, however, raises significant ethical considerations. Unlike many other cutting-edge technologies, neurotechnology can directly access, modify, and replicate the structure of the brain, which is central to concepts such as human identity, freedom of thought, autonomy, privacy, and well-being. The growing ability to manipulate the brain, and hence the mind, in an intrusive and ubiquitous manner raises new concerns.
The term “neurotechnology” refers to any method or instrument that interacts with the nervous system to monitor or control brain activity. Neurotechnology can be used for research, diagnosis, treatment, and the enhancement of human capacities. Its use raises several ethical concerns, including privacy, autonomy, fairness, identity, accountability, and social justice.
Neurotechnology is, of course, not a new concept. For more than 50 years, scientists have been researching how to implant and use electrical devices in our brains. Recent advancements, however, along with growing overlaps with AI, have made debates about its merits and downsides more pressing. On the positive side, electrical stimulation of certain parts of the brain has proven useful in the treatment of Parkinson’s disease and epilepsy. Other neurotechnological techniques have been effective in treating the symptoms of Alzheimer’s disease.
The advancement of neuroscience and neurotechnology creates new quandaries for human rights, particularly the right to freedom of thought, because new technologies may allow access to brain activity from which inferences about individual thoughts can be drawn. First, this calls into question the fundamental principle of inalienable mental privacy. Second, certain neuroscientific findings raise problems for the basic legal idea of free will, and hence of legal responsibility and accountability. If free will is regarded as a human fiction, individuals cannot be held accountable for their acts and therefore cannot be legally prosecuted. Our entire legal system might thereby be called into question.
These ethical concerns, spanning personal identity, autonomy, privacy, fairness, responsibility, and, last but not least, social justice, give rise to a range of significant questions. Do current human rights frameworks provide adequate protection against the hazards posed by unrestricted neurotechnology research and use? Or do we need a new set of neuro-specific rights to defend human rights and dignity from any negative influence, such as the rights to cognitive liberty, mental privacy, mental integrity, and psychological continuity? Are all the unfiltered thoughts of the human brain, for instance, to be put on display when neurotechnology is used for communication?
We ought to ensure that neurotechnology, and decisions made on the basis of brain data, do not interfere with individuals’ autonomy and agency. Regarding the right to privacy, it is equally important to prevent the exploitation of individuals who undergo alterations in their brain function and to protect their dignity. Given the impacts of neurotechnology, we must also learn to assign responsibility and accountability for its outcomes, not only to individuals but to society as a whole.
Neurotechnology can reshape our notion of who we are and harness the power of the brain to increase well-being, from medication that improves the quality of life to brain imaging that revolutionizes our conception of human consciousness. The possibilities for neurotech applications are endless. They have the potential to enhance learning in classrooms. In everyday shopping, they can help connect our inner desires to the items on the shelf. Features such as thought-to-text authoring, and brain-controlled virtual and augmented reality systems for entertainment, could expand the power and possibilities of computing.
However, as with any technological advancement, neurotechnology can do more harm than good if used incorrectly. Because it actively interacts with and affects the human brain, this technology raises questions of human identity, freedom of thought, autonomy, privacy, and flourishing. One example is the possibility of unauthorized access to sensitive information derived from the brain. Brain data is already in high demand for commercial applications such as digital phenotyping, emotional information, neurogaming, and neuromarketing. Industry has created neuromarketing units to analyse and potentially change customer preferences, raising serious concerns about mental privacy. These dangers become even more acute under non-democratic governments.
The public should insist that each individual is the owner of the data collected about them and that it can only be used, published, or traded with explicit informed consent. Developing better and more systematic national frameworks to facilitate meaningful education, engagement, and empowerment around neurotechnology should be a priority. We must become aware of the possible advantages and disadvantages of neurotechnology, particularly where it affects personal integrity, shapes perception, or prompts decision-making, and we must participate in public discourse and take further steps to investigate potential abuses. As for the media, their responsibility lies in dispelling myths about neurotechnology and other emerging technologies and in educating the public by sharing scientific knowledge and confirming its veracity. The media can thereby help the general public decide what should be accepted and what should not. This will be especially valuable in low-income nations with high rates of illiteracy and limited access to the Internet, where the media can communicate with communities in a language they understand, allowing them to grasp what they could not through other means.
As Francesca Rossi, IBM’s AI Ethics Global Leader, explained: “We need to learn how to update the values, principles, frameworks, and tools to involve neuroscience and neuroethics experts in AI ethics venues, and to engage with those people most likely to be affected.”
REFERENCES
- UNESCO, International Bioethics Committee (2022). Ethical Issues of Neurotechnology (adopted December 2021), pp. 4–11, 47–50, 67–70. Available here.
- ITU News (2022). Can AI principles make neurotechnology more ethical? ITU Hub. Available here.