Is there a reality out there?

Human beings love stories. We thrive on them; they evoke our emotions and give us reasons to live (or die, for that matter). Stories are what give us sense and meaning.

Stories are also almost entirely subjective, at least in terms of their final impact on individuals. The final act of every creation, stories included, happens in the mind of the recipient, and that is the one stage over which the storyteller has no control at all. It is the sum of our experiences that decides what our perception of a story will be. Therefore there are as many variants of a single story as there are people on the planet. And from this perspective, who is to say which story is better?

This kind of reasoning brought us the idea of post-modernism, in which even the concept of reality itself began to be treated as subject to one’s interpretation and wholly dependent on one’s point of view. At its peak, this philosophy proposed that we in fact create reality itself through our mental processes, and that by changing the way we think about and perceive the outside world we can completely remake it, because it is nothing more than a narrative. It also proposed an uber-egalitarianism, postulating the absolute equality of everybody’s view of the world.

I’ll spare you the paradoxes of post-modernist philosophies, and a listing of the flaws of this unfortunately quite prevalent kind of magical thinking, but there is one thing I personally can’t ignore. The uber-egalitarianism proposes that science is “just another piece of storytelling”, and that it has no special claim to say what’s real and what’s not. This point of view saddens me, especially when it is voiced by famous psychotherapists or people who really should know better.

This statement is very, very incorrect. I absolutely agree that each of us creates our own inner picture of reality, and that through our actions we can influence the outside world, sometimes even contributing to important changes. But that is by no means the same as creating outside reality, nor is it grounds for claiming that there is no objective reality at all.

There exist certain rules that every atom in the universe seems to follow. These rules can be discovered by systematic observation, and they can even be described in the abstract language of mathematics. Even seemingly chaotic and stochastic systems like the weather can be described with a certain probability and at a certain “resolution”. It is amazing that we were able to create a formalism which allows us to predict events that happen “randomly”, such as radioactive decay. Of course, the course of every single event cannot be described (they are truly random in that sense), but their general, statistical behavior is quite well established: well enough for building reliable nuclear reactors or medical imaging devices.
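To make that concrete, here is a minimal Python sketch (a toy simulation with an arbitrary half-life, not a model of any real isotope) showing how individually random decays still add up to the reliable exponential curve that reactors and imaging devices depend on:

import math
import random

# Toy Monte Carlo: each individual decay is random, yet the ensemble follows
# the exponential decay law N(t) = N0 * exp(-lambda * t) very closely.
N0 = 10_000                     # initial number of atoms (arbitrary)
half_life = 10.0                # half-life, in arbitrary time units (assumed)
lam = math.log(2) / half_life   # decay constant
dt = 1.0                        # simulation time step
p_decay = 1 - math.exp(-lam * dt)   # probability that one atom decays within dt

atoms = N0
for step in range(1, 31):
    decayed = sum(1 for _ in range(atoms) if random.random() < p_decay)
    atoms -= decayed
    predicted = N0 * math.exp(-lam * step * dt)
    print(f"t={step:2d}  simulated={atoms:5d}  theoretical={predicted:7.1f}")

No one can say which particular atom will decay in the next second, yet the printed columns track each other closely run after run.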

Such is our advancement in this kind of observation that we can build tools like GPS, which take into account the curvature of space-time and the relativistic dilation of time, or tackle the idea of quantum computing. We are so certain of the laws that govern the universe that, in our arrogance, we create amazing things that rely on these laws to function properly. And they do work. There are laws in the universe that we can all rely upon.
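For a sense of scale, a back-of-the-envelope sketch (rounded textbook constants and a circular-orbit approximation, not the full GPS error budget) estimates how far a satellite clock drifts each day relative to a clock on the ground:

import math

# Rough estimate of relativistic effects on GPS satellite clocks (SI units).
c = 2.998e8          # speed of light, m/s
GM = 3.986e14        # gravitational parameter of Earth, m^3/s^2
R_earth = 6.371e6    # Earth's radius, m
r_orbit = 2.657e7    # GPS orbital radius (~20,200 km altitude), m

v = math.sqrt(GM / r_orbit)     # orbital speed of the satellite
seconds_per_day = 86_400

# Special relativity: the moving clock runs slow by roughly v^2 / (2 c^2).
sr_shift = -(v**2) / (2 * c**2) * seconds_per_day
# General relativity: the clock higher in Earth's gravity well runs fast.
gr_shift = (GM / c**2) * (1 / R_earth - 1 / r_orbit) * seconds_per_day

print(f"special relativity: {sr_shift * 1e6:+.1f} microseconds/day")
print(f"general relativity: {gr_shift * 1e6:+.1f} microseconds/day")
print(f"net drift:          {(sr_shift + gr_shift) * 1e6:+.1f} microseconds/day")

The net drift of roughly +38 microseconds per day may look negligible, but at the speed of light it corresponds to kilometres of position error per day, which is why the satellite clocks are deliberately tuned to compensate for it.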

These laws do not care who you are, where you live, or whether you are a human being, an amoeba, or a piece of anti-matter. They do not care about your life story. They are identical for everyone and everything: a true example of uber-egalitarianism. As such, these laws are the reality: the objective reality, ever-present for everyone and everything in the same way. And science is the process that seeks to discover and describe these laws. Therefore to dismiss it as “yet another storytelling device” is a mark of ignorance, or a sad lack of understanding of what this process entails.

To be fair, science is messy, and mathematics is an abstract language that, sadly, not very many people know or even want to learn. Science begins with observations (which are objective, repeatable facts), and then tries to come up with some kind of abstract description that allows prediction of the future behavior of the observed system. This part happens in objective reality. The problems may start when one has to translate or interpret the findings.

All our “natural” languages are inherently imprecise, a product of our daily experience and the environment we live in. We have trouble conveying ideas that fall outside such experience, because we lack a proper frame of reference to express the true meaning of elaborate equations that are very precise and leave little room for debate. We have to resort to metaphors to describe constructs like the electron cloud, which are captured without fault by mathematical equations but can never be properly described in everyday words. Such attempts at passing knowledge to the “uninitiated” are prone to misunderstanding and superficiality, as we can witness, for example, with the idea of an “observer” in quantum mechanics. For some reason people started associating the act of localizing an electron (a measurement) with the presence of some kind of consciousness observing the act, sparking a lot of shallow, misguided philosophical speculation. This is the limit of metaphors: things get lost or added in translation. Such is the nature of telling stories; their authors have almost no influence on how they will be understood.

In the end science does indeed tell stories that are supposed to help us make sense of the world we live in. The difference is that the stories being told are based on the most fundamental aspects of a repeatable, reliable, objective reality: a translation from the very precise language of mathematics into our limited, poetic language of everyday life. These stories are not made up. They might be better or worse translations, but what they describe is real.

Some people seem to be offended by the word “objective”, and prefer to use “consensus” instead. I think that is a misnomer. The laws do not care about our consent. Even though there is a substantial subjective element to how things are explained, interpreted and understood, the facts and the laws themselves remain reliably unchanged. There is an objective reality out there, and we rely on it in all our activities every day, especially right now, as you read these words. :)

Thanks to these laws, we are alive, and can go on telling our stories in a manner that is most convenient for us 😀

Tactile input is important

Recent (?) fascination with touch-controlled interfaces is perhaps good for their development, but in my opinion they are not necessarily the future of device manipulation.

One of the big mixed blessings is that you have to rely on visual feedback to operate such an interface. Directly manipulating items that you would be looking at anyway, like photos, is perhaps a tad faster, and wide-sweeping gestures beat hunting for “next” or “previous” buttons. But that advantage does not necessarily carry over to interfaces that rely on mixed tactile/visual, and sometimes even auditory, feedback.

An excellent example is a keyboard. A keyboard gives you at least three kinds of feedback: tactile (the feel of pressing the key and its shape), auditory (the click of the key), and finally visual – letters appear (or not :) ) on the screen. Many people do not appreciate the first two, mostly because they were never trained to type without looking at the keyboard to find each letter, or to use all their fingers while typing. Personally, I believe touch-typing classes should be obligatory in primary or high school; few skills would be more useful in daily life. For example, when visiting a doctor, I often find that typing up the diagnosis with two fingers takes them more time than actually examining the patient. What a terrible waste of time.

Anyway, the reason mixed feedback is important comes down to efficiency. Once you learn to stop looking at the keyboard while you type, you reach a new level of efficiency. You start relying on tactile and auditory cues to feel and hear that you have pressed a key, and to an extent which key you pressed, using visual feedback only for confirmation, not estimation. For those who wonder why there are small embossed dashes on the F and J keys – they mark where your index fingers rest when you use proper typing technique.

A touch screen does not give you this advantage. You use visual cues to find the proper key (while covering it with the very finger you are about to press it with, robbing yourself of part of that feedback), and then visual cues again for verification. You use a single channel to process the information. It is slower not only because tactile information reaches your brain and is processed faster than visual information, but also because you are processing serially instead of in parallel. While typing on a classical keyboard I know whether I have pressed the right or the wrong key even before I get the confirmation on the screen. It is therefore much easier for me to switch from typing mode to correcting mode (and yes, there is a noticeable switch going on) than it is when I am typing on a touch-screen keyboard. My impression is also that the responsiveness and robustness of touch-screen interfaces is still not at the level of keyboards, but I might be wrong, since this field evolves very quickly.

Another example where tactile input is vital is a device that you want to operate without having to look at it. One that comes to mind is an mp3 player. Usually this device sits in my pocket, or in a place I do not have easy visual access to, and for a good reason. So if I want to increase the volume, lock the controls, or change or rewind the track, I would prefer not to have to bring the device into the center of my visual attention. Running, cycling, driving: these are activities that do not lend themselves well to visual distraction. Admittedly, using any device while driving will lessen one’s concentration and might result in an accident, but this is precisely why most car interiors are built so that you can rely on tactile input to operate the radio, heating, air conditioning and everything else.

Therefore, it makes little sense to design an mp3 player around a touch screen. When physical buttons are present, you can learn their layout and operate the device without looking at it. You get immediate auditory feedback (the volume increases, the next track starts playing, and so on). And you can easily lock and unlock the controls, which is perhaps the biggest advantage of all.
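To make the interaction model concrete, here is a hypothetical sketch (the button names and the Player class are invented for illustration, not taken from any real device) of how little is needed for eyes-free control when every action produces its own audible result:

# Hypothetical eyes-free mp3 controller: each physical button maps directly to
# one action, and the audible result (volume change, new track) is itself the
# feedback, so the device never needs to be looked at.

class Player:
    def __init__(self) -> None:
        self.volume = 5       # 0..10
        self.track = 0
        self.locked = False   # hold switch: the "biggest advantage of all"

    def handle(self, button: str) -> None:
        if button == "LOCK":
            self.locked = not self.locked   # toggled by a dedicated slider switch
            return
        if self.locked:
            return                          # ignore presses while in a pocket
        if button == "VOL_UP":
            self.volume = min(10, self.volume + 1)
        elif button == "VOL_DOWN":
            self.volume = max(0, self.volume - 1)
        elif button == "NEXT":
            self.track += 1                 # the new track starting is the feedback
        elif button == "PREV":
            self.track = max(0, self.track - 1)


if __name__ == "__main__":
    p = Player()
    for press in ["VOL_UP", "NEXT", "LOCK", "VOL_UP"]:  # last press is ignored
        p.handle(press)
    print(p.volume, p.track, p.locked)   # -> 6 1 True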

There is also another issue. While using a touch screen to manipulate photos, you often cover the very part you are interested in manipulating, robbing yourself of the visual feedback that the touch screen is supposed to give you. This is not necessarily an optimal way to work. I would agree that it is the way we paint or write by hand, but that only shows the limitations of our tools (limbs). Personally, when faced with a choice between a touch-screen tablet and a standard screen with a cursor driven by a traditional graphics tablet, I prefer the latter, simply because my field of view is wider. Motion estimation is similar in both cases, even if the second setup takes more time to learn and get used to, like learning to use any tool or device.

All these examples show that if touch-screen interfaces are to become more useful, they will have to evolve additional feedback mechanisms. As of now, there are too many applications where they are detrimental to efficiency, and once we set their “coolness” factor aside, their useful scope is still limited.