Our democratic voting system is heralded as one of the greatest achievements of modern civilization. Wars have been fought, blood has been shed, and many have died to ensure our right to vote. Yet we need not look further than the daily news to see the serious threats democratic voting systems face. From foreign powers and nefarious agents to social media exploits, opaque meddling in the affairs of other countries is more prevalent than ever before. This has pushed our hard-earned democracy onto a slippery slope toward authoritarianism. Technology is the common denominator behind all the recent forces putting democracy at risk. The creators of a technology can never foresee the unintended consequences their inventions can have years or decades later. Voting has been forever transformed by both technology and its abuse. Can these complex challenges be mended by changes in the political process alone, or must the solution also have a technological component, to counterbalance the technological genie that has been let out of its bottle? In this book we investigate a novel concept called the Democratic Quality Vector (DQV), a technological tool that can help mitigate some of the serious challenges that plague modern democratic systems. At the same time, the DQV also has implications for information systems and for improving the quality of decision-making in general.
If we closely examine the nature of these challenges to our democratic voting systems from a data science perspective, we can characterize them as issues of two things: data integrity and trust. When bots create fake news, producing the false impression that a large number of people hold a belief, that is a data integrity and trust issue. When social media accounts are siphoned off and psychological profiles constructed to identify voters vulnerable to targeted manipulation, that is once again a data integrity and trust issue. When propaganda is mistaken for truth, that is a data integrity and trust issue. A more colloquial word for data integrity is truth.
The background story that led to the discovery of the DQV begins with a deep analysis of our most fundamental assumptions about truth. The question of how to make truth and trust resilient in the information age led us to examine fundamental philosophical questions such as “What is knowledge?”, “How do we know when something we know is true?” and “What establishes whether something is true or not?”. That in turn led us to open a can of historical worms. Today, the scientific method is the definitive technique for seeking knowledge and knowing what is real in the world. Historically, the scientific method is closely related to rationalism, the fundamental concept that brought about the Enlightenment and the Industrial Revolution and led to the overthrow of a number of heads of state. Rationalism is the foundation of science, and of our modern society, but we puzzled over what has become of intuitive knowledge. Certainly we all still employ it, and intuitive knowledge played an important role in much of human culture until recently. Rationalist theories of scientific research have traditionally discounted intuitive knowledge as unreliable. But this view is beginning to change, paradoxically, because of rationalism itself. Only recently has serious neuro-psychological research into intuition begun to reveal its underlying operating mechanism. Intuition is a refined form of species instinct, and biological systems are demonstrably at the root of much of it. The field of interoception is one of those building a bridge between intuitive knowledge and the underlying biological systems that moderate intuitive signaling. Rationalism is finally beginning to accept the legitimacy of intuitive knowledge, but only after its methodology asked the right questions to reveal intuition's secrets.
The DQV is built on the logic of this new knowledge, and is designed to quantify intangible data, transforming it into a tangible proxy that can be used for rational decision-making.
We also take an excursion into psychoactive compounds because they have always played an interesting role in shaping human culture. They have been a significant part of culture throughout history, and their mind-bending effects are bound to have implications for our perceptions of reality, our ability to make new discoveries, and even our definition of knowledge. Because they allow us to experience the world in an entirely different way, they result in experiences that can have a profound impact on knowledge creation and our understanding of the universe. These alternative experiences can shift our framework of what is true or not.
In politics, data integrity is tightly entwined with that other important concept, trust. For voting is an expression of trust in a representative to perform as promised. But powerful and ubiquitous information technology has exposed the Achilles’ heel of representative democracy – the easy manipulation of data. It is this vulnerability which can easily concentrate political power into the wrong hands. A close examination of our most basic epistemological assumptions about these two fundamental concepts underpinning democracy, truth and trust, leads us directly to the consideration of a reliable new way to value “one unit of social capital” in particular, and a new way of measuring the value of one piece of information in general. We call this the Democratic Quality Vector (DQV).
After the foundations of our measurement have been established as a baseline, we then propose our solution: a new psychometric derived from the Laws of Biology. In the process, we apply this emergent metric to new, more suitable forms of voting systems. We introduce a new voting system, called the transferable voting system, that builds on top of liquid democracy. As we explain in the foundational section, the fundamental mathematics behind the transferable voting system emerged out of a type of mathematics found ubiquitously throughout nature, one that has been used to organize and explain a full spectrum of natural systems, including physical, chemical, biological, and social systems. This gives the system a solid foundation upon which to rest.
However, this does not mean there aren’t many questions concerning how to approach the design of a transferable voting system. How would it overcome the limitations of our current system? What is the best way to interpret the accuracy, magnitude, and direction of a transferable vote vector? How do we assign a meaningful value to the magnitude of the transferable vote’s social capital? Could we assign any value we want to it? If the unit of social capital decreases with each successive transfer, does the sequence converge to zero? How do we assign weight to a voter in the chain? Who has more weight: an academic, a senior, or a plumber? Can we trust the information we use to make our decisions? Who is best placed to assess the validity or truth of a statement, report, or publication? Who decides what the “cold hard facts” are?
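The convergence question above can be made concrete with a simple sketch. Suppose, purely as an illustrative assumption (the book's actual parameterization is developed later), that each transfer multiplies the vote's social-capital weight by a fixed decay factor r with 0 < r < 1. Then the weights along a delegation chain form a geometric sequence: each individual weight does converge to zero, yet the total capital the chain can ever accumulate stays bounded by 1/(1 − r).

```python
def transfer_weights(r: float, n: int) -> list[float]:
    """Weight carried at each hop of an n-step delegation chain,
    assuming a hypothetical fixed per-transfer decay factor r."""
    return [r ** k for k in range(n)]

# With r = 0.5, the hops carry 1.0, 0.5, 0.25, ... -- tending to zero --
# while the running total approaches 1 / (1 - 0.5) = 2, never exceeding it.
weights = transfer_weights(r=0.5, n=50)
print(weights[:3])   # [1.0, 0.5, 0.25]
print(sum(weights))  # close to 2.0
```

This is only one possible answer: a non-constant decay schedule, or one that depends on who sits in the chain, would change both the limit and the bound, which is precisely why the weighting questions above matter.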
As we answer these questions, we are led to construct a final transferable voting system called a Vector-Parametrized Information System (VPIS), and a special instance of the VPIS for voting called a Vector-Parametrized Voting System (VPVS), which incorporates all the properties necessary to compete with our current system of representative democracy.