
Technology at the service of writing, here is the bestseller of the future

Scenarios and possible implications for storytelling: everything suggests that an algorithm could soon be writing the novels of the future


We have dealt many times with the relationship between technology and writing as a technical act, and with technology and writing as a creative act. The latter is a much-discussed topic in the debate on artificial intelligence and cognitive machines. Can an artificial brain produce a creative act, such as storytelling, in the same way a biological brain does? Could the artificial output even surpass the organic one in content quality and narrative style?

If the creative act is a product of knowledge and experience, certainly yes; if instead it is a product of something deeply rooted in individuality and personality, then it remains to be seen how far the revolution of cognitive machines will go. For now, Artificial Intelligence essentially means Deep Learning.

According to Google's technologists, always very visionary, sooner than you think translation performed by an artificial intelligence will be indistinguishable from that done by a human being. And the prediction is credible, given the level already reached by Google Translate, which is powered by artificial-intelligence software.

For lovers of the sociology of writing and literature, we recommend a rather substantial book just released by Random House: The Written World: How Literature Shaped History by Martin Puchner, professor of literature at Harvard. The book, which traces the millennia-long history of storytelling's influence on human actions and its ubiquity in all civilizations, investigates the way new technologies have changed the experience of writing and what the effects of these changes have been, and still are, on society and on the artistic expressions of each era. In his analysis Puchner builds a solid theory on the observation that writing technologies are among the founding elements of major historical events.

Word processing

But let's proceed in order, and deal first with the contribution of technology to the technical act of writing.

The very act of writing with a machine influences thought: "Our writing tools are also working on our thoughts." To say it, or rather to type it, was the German philosopher Friedrich Nietzsche who, owing to failing eyesight, had decided to use a portable typewriter invented in 1865 by the Danish inventor Rasmus Malling-Hansen and presented at the Paris universal exposition of 1878. With his "writing ball" (Schreibkugel), Nietzsche composed some 60 manuscripts before the device broke beyond repair during a trip to Genoa.

In more recent times, with the arrival of the personal computer, it was another eccentric intellectual, as Nietzsche was, who captured the spirit of the time. In January 1983 Playboy published a story by Stephen King, later collected as "Word Processor of the Gods". In the story, written on a Wang System 5 with a word processor called Model 3, a frustrated writer discovers that by deleting sentences about his enemies he physically wipes them off the face of the earth, taking their place. As always, King's gift for transposing technology into story is astounding.

The writer from Maine captures well the essence of software-assisted writing: the program's ability to insert, move or remove words and portions of text without leaving a trace (only the most modern versions of word processors can record the various editorial layers … to the delight of philologists).

Here, with word processing, the act of writing finally becomes one with the acts of reading, correcting, expanding, removing, moving and cleaning up the work. In short, something happens whose value is mainly quantitative, concerning the writer's productivity, but also, to a much more modest extent, qualitative, because it concerns the way thought crystallizes into content, as Nietzsche had intuited with his primitive "writing ball".

It was precisely with the personal computer that word-processing programs began to enter the homes of writers and of anyone needing to produce textual content for any destination. Matthew Kirschenbaum, author of Track Changes: A Literary History of Word Processing (Harvard University Press, 368 pages), estimated that by 1984 half of American writers were using a word processor (WordStar or WordPerfect) to write. It seems that the first to deliver a manuscript stored on an 8-inch floppy disk was Frank Herbert, the author of Dune, at the end of the seventies. In the course of his research, Kirschenbaum discovered that science fiction writers were the first to embrace writing programs on the personal computer.

In fact, it was precisely the most prolific writers, as science fiction writers tend to be, who realized the advantage a word-processing system gave them. A hyperprolific writer like George R.R. Martin, the author of A Game of Thrones, wrote his immensely successful saga with WordStar, the most popular word processor for MS-DOS. Of this program the imaginative writer said: "It was my secret weapon."

The word processor is today an irreplaceable partner for the writer, if only for three fundamental functions: 1) automatic spelling and syntax correction, which helps the writer eliminate typos and, more importantly, misspellings, errors of gender and number agreement and of subject-verb agreement, and repetitions, which are among the most common mistakes; 2) the thesaurus, which helps enlarge the lexicon and discover the words best suited to describing a context, thus choosing the correct register; 3) the choice of language for hyphenation and for grammar and syntax checking, an absolutely indispensable tool for anyone who has to write a multilingual text.
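Function 1) can be sketched in miniature as a toy spelling corrector that proposes the closest dictionary word for each typed token; the five-word dictionary below is purely illustrative, real word processors ship with full lexicons and far subtler heuristics.

```python
# Toy spelling corrector in the spirit of a word processor's autocorrect:
# propose the dictionary word closest to each typed token.
import difflib

# Illustrative mini-dictionary; a real checker uses a full lexicon.
DICTIONARY = ["writer", "processor", "novel", "chapter", "character"]

def suggest(word, dictionary=DICTIONARY):
    """Return the closest dictionary word, or the word itself if nothing is close."""
    matches = difflib.get_close_matches(word.lower(), dictionary, n=1, cutoff=0.7)
    return matches[0] if matches else word

suggest("novvel")      # proposes "novel"
suggest("proccessor")  # proposes "processor"
suggest("zzzz")        # no close match: returned unchanged
```

The `cutoff` parameter controls how aggressive the corrector is: lower it and distant words start being "corrected", raise it and genuine typos slip through.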

The Macintosh typographic revolution and the birth of desktop typography

In 1984 the Macintosh introduced what first-generation word processors lacked: typography. Thanks to the 8 typographic fonts included in the Mac operating system, authors could give their documents a typographic form. The following year, the combination of the Mac, the PageMaker desktop-publishing program (developed by Aldus of Seattle) and the laser printer (the Apple LaserWriter) yielded an affordable, easy-to-use system for producing paginated documents of typeset quality. This combination started a new phenomenon: desktop publishing. Desktop typography changed the very nature of the publishing industry, transferring more power to authors.

It is as if writer and printer had merged into one, so that the producer of the content is at the same time the creator of the graphic form of his work. It seems a trivial matter, but it is not, because this textual/visual fusion offers many interesting ways to improve the appearance, appeal and usability of content. It improves something the writers most sensitive to the communicative side of their work have always sought: legibility.

With desktop publishing, word processors also began to introduce advanced formatting and page-layout functions to indicate the style in which the writer-designer wanted his text to appear before the reading public. The Barbadian poet Kamau Brathwaite wrote that writing with the Mac "enabled him to write in the light." Illumination, indeed.

The word processor helps the writer enormously in editing creative material, but it is of little help in organizing, structuring and designing it, that is, in building what is called the outline. You can keep outlines of the content, but you cannot build a relational canvas. For this purpose specific software comes to the aid: the aptly named "think tanks", literal collectors of thought.

It was with the Macintosh, in 1987, that the first real think tank with an inspired name arrived: HyperCard. Created by one of the greatest talents in software development, Bill Atkinson, HyperCard allowed users, through a very simple programming language called HyperTalk, to structure and relate information collected in cards arranged in a stack. The writer could thus collect, describe and annotate general thoughts, the specific events of the plot, the places of the action, the characters and the timeline, and relate them according to a given narrative strategy.

The most amazing thing about HyperCard was its extraordinary ease of use and versatility. The information on a card could be modified, and the change was immediately reflected on all the cards corresponding or connected to that specific piece of information. We can only guess how far Dostoevsky or Victor Hugo, staging their deliriums of characters, might have gone had they had HyperCard available. Dostoevsky, moreover, in his narrative fury brought back characters who had disappeared, leaving the reader stunned. Perhaps with HyperCard he would have avoided these sudden resurrections, but perhaps that inner narrative magma would then no longer have had the strength to suck the reader in like a cosmic event.
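The card-and-stack model can be sketched as a simple data structure: cards hold fields and links to shared objects, so editing one piece of shared information is immediately visible from every card that refers to it. The classes below are a loose illustration of the idea, not HyperCard's actual design.

```python
# Loose sketch of HyperCard's card-and-stack idea: cards hold fields
# and links, and fields that reference a shared object see edits to it
# from every card at once.
class Card:
    def __init__(self, title, **fields):
        self.title = title
        self.fields = dict(fields)  # field values may be shared objects
        self.links = []             # links to related cards

    def link(self, other):
        self.links.append(other)

class Stack:
    def __init__(self):
        self.cards = []

    def add(self, card):
        self.cards.append(card)
        return card

stack = Stack()
# One shared "character" object referenced from two scene cards.
character = {"name": "Raskolnikov", "status": "alive"}
scene1 = stack.add(Card("Scene 1", character=character))
scene2 = stack.add(Card("Scene 2", character=character))
scene1.link(scene2)

# Edit the shared information once...
character["status"] = "fugitive"
# ...and every card that references it reflects the change.
```

The propagation costs nothing because it is simply shared references: the stack never copies the character, it points at it.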

Then there is a whole family of software for building mind maps, a form of graphic representation of thought with a hierarchical or associative structure, useful for giving substance to a creative project such as a narrative work. Anyone interested in this topic can start by reading and practicing with Nina Amir's book Creative Visualization for Writers: An Interactive Guide for Bringing Your Book Ideas and Your Writing Career to Life.

For screenwriters there is even more specific software, able to perform typical screenwriting functions that standard word processors are not equipped for.

Natural Language Processing (NLP)

Summly is to text generation what the dog Laika was to space exploration: one of the first sensible attempts to have specialized software produce structured text through Natural Language Processing. Summly is, in fact, an iOS app developed by a fifteen-year-old Londoner, Nick D'Aloisio, of probable Italian origins. In his bedroom, the young Londoner developed an algorithm able to condense articles of any length into 300-400 words, so as to fit them on an iPhone screen.

The young Londoner's app received impressive media coverage, and in the end his creation made him a millionaire when Yahoo decided to acquire the project for 30 million dollars, renaming it Yahoo News Digest. Yahoo's app won an Apple Design Award at WWDC 2014 for its technological and design excellence. The app does indeed work well and does justice to the articles it summarizes in just 400 words. In the image above you can see with what propriety of language and coherence of content it summarizes a BBC report on the L'Aquila earthquake.
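Summly's actual algorithm is proprietary, but a naive frequency-based extractive summarizer belongs to the same family of techniques. The sketch below scores each sentence by the frequency of its words and keeps the best sentences within a word budget; real systems are far more elaborate.

```python
# Naive extractive summarizer: rank sentences by average word
# frequency, keep the top ones until a word budget is reached,
# then restore the original sentence order.
import re
from collections import Counter

def summarize(text, max_words=400):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)
    summary, count = [], 0
    for s in ranked:
        n = len(re.findall(r'\w+', s))
        if count + n > max_words and summary:
            break
        summary.append(s)
        count += n
    summary.sort(key=sentences.index)  # original reading order
    return ' '.join(summary)
```

Called with `max_words=400`, this mimics Summly's budget of roughly an iPhone screen of text; the budget, not the prose, is what gets optimized.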

A different area from journalism in which Natural Language Processing technology is taking hold is the legal one. Legal work can already rely on commercial algorithms capable of scanning and analyzing large numbers of documents to extract those relevant to the case at hand. It is estimated that this technology will cut by 13% the man-hours a law firm spends preparing a case (thus reducing costs for the firm and its clients). McKinsey estimates, as a result, that 23% of legal work could be automated in the not-too-distant future. The legal profession will thus be able to concentrate its resources and energies not so much on data mining as on the highest level of the profession: the development of defense or prosecution strategies.

The world of finance, too, is deeply affected by Natural Language Processing. Through the analysis of unstructured sources (such as posts from Facebook or other social media), NLP algorithms can extract predictive information on stock-market trends to guide investors' choices. Many investors have come to agree that this kind of collective wisdom is the best guide for operating on the stock market, which tends to confirm Rockefeller's belief that it was the elevator man who had the best information on stocks.
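The predictive pipelines described above rest on scoring the sentiment of unstructured posts. A toy lexicon-based scorer conveys the idea; real systems use trained models rather than the hand-written word lists assumed here.

```python
# Toy lexicon-based sentiment scorer over social-media posts: the
# aggregate score is the kind of bullish/bearish signal NLP-driven
# trading systems extract, here reduced to two hand-written word sets.
import re

POSITIVE = {"gain", "growth", "record", "beat", "strong"}
NEGATIVE = {"loss", "drop", "miss", "weak", "lawsuit"}

def post_score(post):
    """+1 per positive word, -1 per negative word."""
    tokens = re.findall(r'\w+', post.lower())
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

def market_signal(posts):
    """Aggregate sentiment: positive means bullish chatter."""
    return sum(post_score(p) for p in posts)

posts = [
    "Record growth this quarter, strong earnings beat",
    "Facing a lawsuit after the revenue miss",
]
signal = market_signal(posts)  # net positive: bullish chatter
```

The hard part in practice is not the arithmetic but the lexicon and the sarcasm, slang and context that a fixed word list cannot capture.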

Story Generator Algorithms

In the field of writing there is more and more talk of automated writing, robo-journalism and machine writing, a phenomenon taking hold especially in specialized journalism such as financial reporting. Robo-journalism software produces, in just a few minutes, many of the 3,700 Associated Press stories on the quarterly financial statements of listed companies. Some of the pro-Trump and anti-Clinton texts posted by Russians on social media are also believed to have been packaged by an automatic writing algorithm and then made viral by bots.

Few have heard of National Novel Generation Month, but NaNoGenMo really does write the future. This eccentric initiative, a counterpart to the National Novel Writing Month literary competition, invites creatives and developers to spend the month of November writing code capable of generating a novel of 50,000 words (about 120 printed pages). Once generated, the novel must be posted on GitHub, a resource to which 20 million developers subscribe. Darius Kazemi (developer and Internet artist from Portland), the initiative's creator, said: "Storytelling is one of the great challenges of artificial intelligence. Companies and researchers are working to create algorithms that can generate stories that make sense, but many of these only generate short chunks of sensible text." A first look at the entries presented in the competition shows how true this statement is. So, on the purely creative side, forget it; on the Deep Learning side there is something more.
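Many NaNoGenMo entries produce exactly the "short chunks of sensible text" Kazemi describes, often with nothing more sophisticated than a Markov chain over word pairs: locally plausible sequences, no global narrative sense. A minimal, illustrative sketch:

```python
# Minimal word-level Markov chain, the workhorse of many NaNoGenMo
# entries: each word is followed by a random successor observed in
# the source text, so the prose is locally plausible but aimless.
import random
from collections import defaultdict

def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n_words=20, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(n_words - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return ' '.join(out)

corpus = "the knight rode north and the knight rode south and the raven flew north"
chain = build_chain(corpus)
novel = generate(chain, "the", n_words=10)
```

Run it over a real book instead of one sentence and set `n_words=50000`, and you have a perfectly valid, perfectly unreadable NaNoGenMo entry.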

The logo of the literary competition bears a maxim by Leonard Bernstein: "To achieve great things, two things are needed: a plan, and not quite enough time." The participants in this competition must indeed write a novel in 30 days.

A first interesting application of Deep Learning algorithms could be contributing to the sequels of narrative series that are already substantial and structured, such as A Song of Ice and Fire or Harry Potter. We are talking about thousands of pages for the algorithm to work on. Characters, places, events and plots could be examined and stored by the algorithm in order to elaborate possible new narrative developments or predict possible sequel scenarios.

Zack Thoutt, a developer from Boulder, Colorado, has created a neural algorithm to predict the sixth book in the saga by George R.R. Martin, who will not deliver the highly anticipated manuscript of The Winds of Winter before 2019. The algorithm has already produced spoilers of the new Game of Thrones, which Martin welcomed with his characteristic self-irony.

Max Deutsch, a San Francisco technologist and blogger who founded the startup Openmind, had a deep learning algorithm learn the first four Harry Potter books, then asked it to produce a chapter from what it had learned in this deep reading. The chapter produced by the algorithm was published on Medium. Fun, and readable too!

The fact remains, however, that the algorithm-novelist, i.e. the Story Generator Algorithm, is still in its infancy, and there is a long way to go before writers like Martin or Rowling are sent to the bench.

Story Generator Algorithms are, however, neither a sterile project nor an unfounded one. The web page of the Interdisciplinary Center for Narratology of the University of Hamburg documents the history of this technology and its development, and we gladly refer anyone interested in learning more to it.

The Economist experiment

The Economist, in addition to being one of the world's most authoritative periodical publications and the largest independent liberal think tank, can in some ways also be considered a humorous publication. Yes, because a very British humour is an integral part of its unmistakable narrative mix, and also a fundamental requirement for joining the anonymous group of journalists who produce the periodical.

Well, on the eve of Christmas the London weekly decided to run a Turing-style experiment to see whether it could leave one of its Science and Technology correspondents at home after the holidays. It assigned both the correspondent and a specialized algorithm a 500-word scientific report. But let's have The Economist tell the story itself; it's only three minutes of reading. The title of the piece is "How soon will computers replace The Economist's writers?". We have a few years left, at least. Thank God!

The machines are coming. A well-known 2013 study concluded that half of US jobs are at risk within a generation. Writers are not immune to this trend. Another AI study claims that computers will be able to do schoolwork by 2025 and produce short stories and novels by 2040.

In the spirit of "move fast and break things", The Economist set an AI algorithm to learn the articles of its Science and Technology section in order to produce an "artificial" piece. The results shown below reveal the possibilities and the limits of machine-learning programs, which are in essence today's Artificial Intelligence.

The computer tried to imitate our style and identified the topics we cover most frequently. Although the sentences are grammatically correct, they lack meaning. To his relief and ours, the Science and Technology reporter will still find his desk waiting when he returns from the Christmas holidays. Here is the piece produced by our robot.

We offer it to you in English, because to translate it would be to betray it.

A MUST of the world's largest computer scientists have shown that the cost of transporting the sound waves into the back of the sun is the best way to create a set of pictures of the sort that can be solved. It is also because the same film is a special prototype. A person with a stretch of a piece of software can be transmitted by a security process that can be added to a single bit of reading. The material is composed of a single pixel, which is possible and thus causes the laser to be started to convert the resulting steam to the surface of the battery capable of producing power from the air and then turning it into a low-cost display. The solution is to encode the special control of a chip to be found in a car.


The result is a shape of an alternative to electric cars, but the most famous problem is that the control system is then powered by a computer that is composed of a second part of the spectrum. The first solution is far from cheap. But if it is a bit like a solid sheet of contact with the spectrum, it can be read as the sound waves are available. The position of the system is made of a carbon containing a special component that can be used to connect the air to a conventional diesel engine.


The problem with the approach is that it reaches the fuel by reflecting a fuel cell to an array of materials that are sensitive to the light that is composed of solar energy. In the meantime, the process can be made to act as a prototype of a superconducting machine. The technology is also a short-range process that is being developed for comparison by the magnetic fields of the solar system.


The result is a chemical called the carbon nanotube that is absorbed by the process of converting a solid oxide into a chemical that is specific to the cellular nerve. The stuff is able to extract energy from the image and then releases the electrons that can be detected by stimulating the image in the bloodstream. The surface temperature is not a molecule that is also being compared with the small energy of the structure of a metal. A single organ is a large amount of energy, which is particularly intense. The internal combustion chamber is thus able to produce a photon which is being developed to produce a second protein called the body-causing protein that has a complex and comparable process to stop the components of an antibiotic.


Reading the piece is quite astounding. The arguments are there, the writing is passable, the information is correct, but there is no overall sense: the relationship between the paragraphs cannot be grasped, and there is no narrative development. As for machine translation into Italian, let's forget it; but then Italian is not one of the languages best served by Google Translate.

Marinetti would certainly have liked this casual collage of sentences, each complete in meaning but without a logical thread. Even Beckett and Ionesco would have found it stimulating for building a dialogue of the absurd between two technological freaks.

 
