This idea is about providing people with a calendar that utilizes blockchain technology while taking the most convenient format. The calendar would ultimately be tied to blockchain technology, and intrinsically to the history of real-life events. It would be a tool for organizing and keeping a record of as many actions and events as possible (as a distant vision, all of them). It is envisioned as a single unified calendar platform, manifested in a network application.
Creating an entry would only be possible into the future (no matter how near or far), in accordance with blockchain, where the past is highly unalterable. Cancelling would be an option too, but once entered, an entry could never be completely erased. Even if a user removed an event from their calendar, the system would store the change, making the formation of plans visible.
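The append-only, future-only behaviour described above can be sketched as a hash-linked log. This is a minimal hypothetical design, not a real blockchain implementation: entry creation is rejected for past times, and cancelling appends a new record instead of deleting the original.

```python
import hashlib
import json
import time

class BlockchainCalendar:
    """Sketch of an append-only calendar: entries may only be created
    for future times, and cancellations are stored as new records, so
    nothing is ever erased (hypothetical design, not a real API)."""

    def __init__(self):
        self.chain = []  # each record links to the hash of the previous one

    def _append(self, record):
        record["prev_hash"] = self.chain[-1]["hash"] if self.chain else None
        payload = json.dumps({k: v for k, v in record.items() if k != "hash"},
                             sort_keys=True)
        record["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.chain.append(record)
        return record["hash"]

    def add_event(self, title, event_time):
        # Entries can only be created into the future.
        if event_time <= time.time():
            raise ValueError("entries can only be created into the future")
        return self._append({"type": "create", "title": title,
                             "time": event_time})

    def cancel_event(self, event_hash):
        # The original entry stays in the chain; only a tombstone is added.
        return self._append({"type": "cancel", "target": event_hash})
```

A cancelled event thus leaves two records in the chain, and the history of the plan remains reconstructible.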
There could be further features distinguishing the Blockchain Calendar from other mainstream calendar applications, making it more advanced and more convenient. It could be collapsible: months, weeks, days, or hours in which nothing is planned yet would occupy only a small space in the timeline view (or even in other views) of the calendar, perhaps just one unit (of rows or columns, depending on the view mode).
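The collapsing idea can be illustrated with a small sketch: consecutive empty units are folded into a single placeholder row. The data shape (a list of label/entries pairs) is a hypothetical simplification.

```python
def collapse_timeline(units):
    """Collapse runs of consecutive empty time units into one row.
    `units` is a list of (label, entries) pairs; hypothetical shape."""
    rows, empty_run = [], []
    for label, entries in units:
        if entries:
            if empty_run:
                # Fold the accumulated empty units into a single row.
                rows.append(("collapsed: %d empty units" % len(empty_run), []))
                empty_run = []
            rows.append((label, entries))
        else:
            empty_run.append(label)
    if empty_run:
        rows.append(("collapsed: %d empty units" % len(empty_run), []))
    return rows
```

Four days with two empty ones in the middle would render as three rows, the middle one being the collapsed placeholder.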
Furthermore, for future plans and tasks for which the user has no precise (hour-level) time or deadline, an approximation could be given. There would thus not only be units of days; weeks, months, and even years or decades (especially concerning the distant past) could be displayed as units containing entries. These units could have distinguishing colors: the longer the unit, the darker or lighter the shade, for example. There could be specific group features for the calendar accounts of companies, political parties, international organizations, etc.
Thinking even further, smart contracts could be combined with or incorporated into the Blockchain Calendar, and the plans could be a basis for programming smart devices and machines in the human environment, meaning that the calendar could also be connected to the Internet of Things.
There are many phrases in blockchain terminology which include the word “proof”: proof of existence, proof of work, proof of integrity, proof of ownership, etc. In legal cases, the Blockchain Calendar could be the digital proof of the past. The past would already be fixed in the calendar, or could be extrapolated from the present more easily than today.
As the Blockchain Calendar technology evolves, verification methods, such as connecting the calendar with a location tracker, could be improved. And because the calendar would be a convenient and efficient tool, it could become widespread, with higher participation making it statistically more trustworthy.
The Blockchain Calendar technology and its philosophy enforce transparency and accountability, which are positive phenomena for the whole of society. This enforcement can ultimately lead to a “blockchain of good”, the chain of morally irreproachable actions and events in human lives.
(Written in 2020.)
All of the below ideas are intended to support the ease of use and/or the effectiveness of time management in and beyond Google Calendar for the user.
The schedule view is the first step toward having an overview of one’s calendar as a single timeline into the future. The next step would be to have specific view modes within the schedule view. Weeks, months, years, and maybe even decades should be collapsible and expandable one by one, in order to give a clear view of, and focus on, the selected time unit(s).
Apart from all-day events, users should have the option to add events or tasks to units of time as unspecific as a week, month, year, or even a decade. This could work with an optional, expandable “this day”, “this week”, “this month”, “this year”, and “this decade” field for each corresponding unit of time. Within the schedule view, these units could be displayed with characteristic colours (darkening as the time frame grows) to indicate their general nature.
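The darkening-colour idea maps naturally onto a small function. The unit names and the specific grey values below are illustrative assumptions, not part of any real calendar API.

```python
# Hypothetical mapping from the granularity of a calendar entry to a
# display shade: the wider the time frame, the darker the colour.
UNIT_ORDER = ["day", "week", "month", "year", "decade"]

def unit_shade(unit):
    """Return a grey hex colour that darkens as the unit grows."""
    i = UNIT_ORDER.index(unit)   # 0 (day) .. 4 (decade)
    level = 220 - i * 40         # 220, 180, 140, 100, 60
    return "#{0:02x}{0:02x}{0:02x}".format(level)
```

A "day" entry would render in a light grey, a "decade" entry in a dark one, signalling at a glance how approximate the plan is.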
The calendar could have Gantt-chart-like elements, and users could add events or tasks of any duration, even without start or end dates. These events and tasks would probably also be most strikingly visualised in the schedule view.
Beyond enabling users to add recurring events, the calendar should provide the option of having weekly, monthly, and yearly recurring plans (packages of events and tasks), with options to apply them to multiple future units of time, and also one by one. The plans should be modifiable after application, altogether or one by one.
The keyboard shortcuts Ctrl+C, Ctrl+X, and Ctrl+V should work for events and tasks as they do in many other applications.
Adding a blockchain layer to a calendar platform could transform it into the manifestation of the chain of good. In the public sphere, it could serve as proof of integrity, or simply as objective proof of the past.
(A political critique of the social media model.)
In this article, neither specific corporations nor their CEOs will be accused of malevolence. The article concentrates on the faults in the universal model of social media.
The boundaries between social media and the real world have become fuzzy over the past decade, but by taking a step back from social media, one arrives at an external perspective. From here it becomes possible to draw parallels between the layers of human reality and the various functions of social media.
From a conscious human perspective, the innermost layer of reality is our thoughts. They are private, and they include our doubts, desires, and interests. On social media, these private doubts, desires, and interests manifest in searches. Searches are theoretically private, but social media systems have access to them on the individual and/or on higher levels.
The next human level is our own perception of ourselves. In social media, activity logs are internal mirrors of a personality, encompassing both searches and communication with friends. We all have smaller or bigger differences between the person we perceive ourselves to be and the person we display ourselves as, to our friends and wider circles. Our external manifestation on social media is our profile page.
Moving from individual layers to interpersonal ones, the posts, comments, stories and reactions on social media can be depicted as parallels to our chosen words in the real world.
The outermost and highest layer of the human individual is their actions in the world. In social media, however, there are no parallels to actions. Self-expression in the form of posts, comments, stories, and reactions has already been categorised on the previous layer. One could say that the news feed is like the world, but it is a world where the user has the least say in what “happens” to them. The user mostly scrolls. Scrolling is the main activity of users on social media.
Beyond the reach of individuals, in the real world, there is their overall data. Various institutions have access to some societal data, but none in the detail, breadth, and with the tools of social media. Social media knows a significant amount about users’ conscious and subconscious information; meanwhile, it has things to sell to them, and it can directly influence their habits and limits, at least within the system. In the world of social media, the psychotherapist, the salesperson, and the mayor are one and the same, which sounds utterly dangerous.
The definition of political absolutism (according to the Encyclopaedia Britannica) is: “[…] the political doctrine and practice of unlimited centralized authority and absolute sovereignty […]”
Now, in social media, the problem is not just that there are no entities corresponding to multiple parties; different social media platforms, originating in different countries, might be viewed as such. The more significant problem is that there is absolutely no separation of the branches of life. All layers and aspects of real life and the world are squeezed into one system. The system resembles, first of all, the market and the government in one entity, but the “healthcare” and “hospitality” of social media are in the same hands. It is the same system through which your (real and/or stimulated) needs are being taken care of, and which facilitates your communication with your friends. If someone knows how this can be managed ethically, their contribution would be greater through expressing this knowledge in political philosophy than through having built social media platforms.
If one recognizes the underlying dangers of the social media model, the following questions arise (with which the article will leave the reader):
Have social media firms fully recognized their latent power?
What should citizens be more worried about: Governments gaining more control over them through partnerships with social media, or social media gaining more control over the government through the citizens?
Why did we get here?
Can there be an ethical model of social media, and what would be its characteristics?
Should social media be part of life in the distant future?
By paying with their time and information, the essences of their lives, are users contributing to the creation of a better future?
Today (February 10, 2024) I had a conversation with a friend about the future possibility of having thoughts "scanned" and made accessible, in a transparent manner. Our arguments "wobbled" between the present state, i.e. our thoughts not being read (or not yet, at least as far as we know), which my friend represented and argued for, and the futuristic polar state of making all thoughts accessible, which is my preference.
I will share my arguments, although they must have appeared more extensively in the respective literature already. What may be unique about them here is their combination, that they are presented together. I am aware of the current ethical and legal strength of the counterarguments (represented by my friend, and not discussed here), but I would like to draw the readers' attention to the fact that their judgement of the arguments may be highly biased towards the seemingly stable state they are in, i.e. the present version.
The three arguments for not being (too) paranoid about thoughts being "read" or scanned, and for making all thoughts accessible, are as follows.
1. If someone is (remotely) "scanning" your thoughts at present, that is, if such technologies are already implementable, chances are that you have no contact with those people, nor access to the (relevant part of the) related infrastructure anyway.
2. You shouldn't do or plan to do "bad stuff" (according to your own knowledge of the law and of morals) anyway.
3. If all thoughts were made accessible, what a (different) world that would be! Think of the personally clarifying power of revealing sensitive inner perspectives: temporarily unpleasant, perhaps, but ultimately tranquilizing. Think also of the increased opportunities for, and the acceleration of, collaboration with each other and with machines. Legally and politically speaking, this futuristic, transparent system of thought scanning and sharing (it might be called the Internet of Minds) would boost the cultivation of our values and their implementation in our value systems too.
Big Tech has a democracy problem. The values that Tech Giants impose on their systems and users are not democratically accepted or consensual ones. Even though I am on the same page with Google, for instance, when it comes to political values (a statement I dare to make based on all the year-in-search videos to date), I am worried about the mechanism of shaping users' views in the longer term. Tech Giants are not political entities, but they have roles in politics, and the underlying mechanism of imposing values, norms, and views on systems and users substantially lacks democracy.

I understand that without such mechanisms, online tools could be largely shaped by so-called trolls, by awful people, and that it is difficult to filter out some terrible bots in the process too. If we judge a more democratic process to be inappropriate in this situation, however, how could we justify that it is appropriate in society and politics in general? Where should the lines be drawn to solve this specific version of the paradox of tolerance (which is also a Russell's paradox, given that we categorise democracy under the values of tolerance)? More selfishly and alternatively, how can we avoid the underlying process "biting back" in the future?
(March 27, 2024)
By “Inverse Turing Test”, the analogue of the classical Turing test is meant, with contemporary AI not only as one of the respondents, but also in the role of the “interrogator”.
There are two possible and expectable alternative outcomes of the test. Either the interrogator AI succeeds, recognizing which participant in the test is the actual human (which also means the failure of the respondent AI); or the interrogator AI fails to recognize the actual human, which means that the respondent AI succeeds in deceiving it. If the outcome (either of the two) is clear and detectable, then by the design of this test, the AI will “beat itself” in one way or another.
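The either/or structure of these outcomes can be stated schematically. The function below is a bare sketch of the scoring logic only; the interrogator and respondent stand in for AI systems, and no real model is invoked.

```python
def inverse_turing_test_outcome(interrogator_guess, actual_human):
    """Given the interrogator AI's guess of which participant is the
    human, report which AI 'wins'. By design, one AI always beats
    the other (a schematic sketch, not an implementation)."""
    if interrogator_guess == actual_human:
        return "interrogator AI succeeds; respondent AI fails"
    return "respondent AI succeeds in deceiving; interrogator AI fails"
```

Whichever branch is taken, an AI has "beaten itself": either the respondent AI failed to deceive, or the interrogator AI failed to recognize.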
The Halting Problem in its simplified form is easily understandable from this ca. 4-minute video. It illustrates that there can be situations which, by logic, not even an “omnipotent” computer (like an artificial general or super-intelligence) could handle.
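The classic contradiction behind the Halting Problem can be sketched in a few lines. Assume a perfect decider `halts(program)` existed; the construction below builds a program that defeats any candidate decider. The two lambda deciders are toy assumptions used only to demonstrate the trap.

```python
def make_paradox(halts):
    """Given any candidate halting decider, build a program that the
    decider necessarily misjudges (classic diagonal construction)."""
    def paradox():
        if halts(paradox):   # if the decider says "halts"...
            while True:      # ...loop forever instead
                pass
        return "halted"      # otherwise, halt immediately
    return paradox

# A decider that claims every program loops forever is refuted:
q = make_paradox(lambda program: False)
# q() actually halts, contradicting the decider's claim.

# Symmetrically, a decider that claims every program halts produces a
# paradox program that would loop forever (so we do not call it here).
p = make_paradox(lambda program: True)
```

No candidate decider escapes this construction, which is why no general `halts` can exist, not even for an "omnipotent" computer.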
The Inverse Turing Test is “the Halting Problem of AI” in that it illustrates its limits through logic, through a theoretically straightforward proof.
Generalizing its findings and picturing the practical implications, however, may be devastating. In general, we are dealing with the deception of AI by AI in the context of humanness. It is crucial which “role” an AI is better at in the Inverse Turing Test, in the case that both the interrogator AI and the respondent AI originate from the same program, that is, have been trained identically prior to the test. In their respective roles, they will presumably start “diverging” from the start of the test. Drawing some cautionary clues, humans may want to make sure that AI is by default, by design, or by training better at recognition (human-AI distinction) than at deception. This could even be made a general security principle.
As foreshadowed above, the consequences may be far-reaching and fatal, especially in terms of cyber fraud and military applications. Humankind’s hurtling into an AI deception (arms) race may need to be prevented. Establishing two classes (“decent” and “deceptive”) of AI, based on their performance in the Inverse Turing Test, would contribute to this process. Eradicating deceptive AI could later be key in preventing AI from “destroying humankind”. It might be challenging, though, because while eradication is a collective interest, various nations and powers may want to apply deceptive AI in their military operations and wars with each other, potentially leading to situations structured like multi-player prisoner’s dilemmas.
(August 2, 2024)