Thursday, 2 March 2023

ChatGPT and Higher Education

NightCafé Studio 02/03/2023
As the arrival of ChatGPT has opened up new possibilities for students to complete written assignments, universities must carefully consider how to approach this new tool in their policies. This consideration is even more important at universities where the students are not native speakers of English and where linguistic proficiency is part of the evaluation of student performance. There are several options available, each with its own advantages and drawbacks, but my preference is a forward-looking one.

One option is to ban the use of ChatGPT and to implement severe punishments for any students caught using it. This is a choice made by a few educational institutions worldwide, and its advantage is that it is clear and simple. However, this approach may not be practical, as it may be difficult to monitor the use of this tool effectively, even though tools have been developed to spot the use of ChatGPT. Moreover, some students may still find ways to use ChatGPT discreetly, making such a policy difficult to enforce.

NightCafé Studio 02/03/2023

Another option is to do nothing and to treat ChatGPT as a tool similar to spell, grammar, and style checkers. If students can use the latter tools, why not let them deploy a somewhat more advanced tool as well? While this approach may seem reasonable at first glance, it raises several issues. For one, it makes it challenging to evaluate a student's actual writing ability, especially in the case of non-native speakers of English. Moreover, ChatGPT's ability to produce perfectly written papers may be a threat to academic integrity, as it could be difficult to differentiate between work produced by students and that produced by AI. Thus, when it comes to evaluation, it is difficult to tell who is being evaluated.

NightCafé Studio 02/03/2023
A third option is to abandon written assignments that are not done in class altogether. This is again a simple and effective choice. However, this approach may not be beneficial, as written assignments are often an essential part of many courses and can provide valuable opportunities for students to develop their writing skills, transferable skills that can be used in every walk of life. And if the future of content creation lies in applications similar to ChatGPT, then the educational programme enforces a methodology that is alien to real life.

A more forward-thinking approach may be to teach students how to use ChatGPT effectively, responsibly, and, more importantly, critically, recognizing that AI-generated texts may well be the way of the future. This approach may involve revising the evaluation of written assignments to take into account the use of AI tools, emphasizing the importance of critical thinking and the ability to integrate information from multiple sources in the writing process. It may also involve teaching students how to evaluate the reliability and accuracy of information generated by AI. And when the students' linguistic abilities are to be evaluated, the in-class methodology can be used. So teaching the responsible use of ChatGPT and revising the evaluation methodology go hand in hand.

NightCafé Studio 02/03/2023

Whatever the chosen approach, universities must also consider what to do with students who choose not to use ChatGPT. This is even more so if students' choice is motivated by a lack of technological resources. Instructors must make it clear that students have a choice in how they complete their assignments, but that they will be evaluated on the quality of their work, regardless of the tools they use. But again, the quality of the product may well depend on the students' financial background or technological interests, so avoiding a deepening of the digital divide should be the focus of our attention.

In conclusion, the arrival of ChatGPT presents both opportunities and challenges for universities in their approach to written assignments. While it may be tempting to ban the use of this tool, it is essential to recognize its potential benefits and to find ways to integrate it effectively into the learning process. This requires careful consideration of the implications of AI-generated texts and the development of policies that promote responsible use of this technology. What would be your choice?

PS. I have given very detailed prompts and introduced quite a few sentences and clauses. So again, it is rather difficult to differentiate what is mine and what is exclusively ChatGPT's.

Tuesday, 7 February 2023

Text Synthesis and the Beehive

In the previous blogposts, with the help of the OpenAI chatbot, I launched a series of posts casting light on how AI may/will change academic work. In this post the starting point is that the way we speak about AI will inevitably determine how we think about it, so it seems important to create a vocabulary that will enable a rational discourse on the OpenAI chatbot. To this end I am going to focus on the vocabulary used to represent machine and human text creation. As far as the method is concerned, this post is the result of composing the responses of the OpenAI chatbot to a variety of questions into a coherent post. Again colours will distinguish between the voice of the bot and mine. Let's get down to details then.

NightCafe Studio 07/02/2023

There are several terms and metaphors that can be used to describe the process of text creation by AI. One common term is "text generation," as it accurately describes what AI is doing: generating text. Some people also describe AI text generation as "text composition," which emphasizes the idea that AI is composing text from various elements in a structured way, much like a composer composes music. Another term is "text synthesis," which emphasizes the idea that AI is synthesizing text from various sources and patterns that it has learned. In this post the focus will be on “text synthesis” and a metaphor related to it.

The term "text synthesis" refers to the process of generating a new text by combining and transforming existing texts. In the case of AI, text synthesis is the process of using algorithms and models to generate coherent and meaningful text based on patterns and structures learned from training data, which can include a variety of written content such as news articles, books, websites, and more. The AI system then uses this training data to learn patterns and relationships between words, phrases, and sentences, and can use this knowledge to generate a new text.

NightCafe Studio 07/02/2023
Two significant aspects of text synthesis may illuminate the process. One key aspect is that text synthesis is a probabilistic process. This means that the AI system generates a text based on the probability of certain words, phrases, and sentences appearing together, rather than by following a strict set of rules. Another important aspect of text synthesis in AI is that it can be controlled and fine-tuned through various means, such as adjusting the amount of randomness in the output or controlling the length of the generated text. This can help to ensure that the generated text is coherent, relevant, and meets certain quality criteria.
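The two aspects described above, probabilistic word choice and an adjustable amount of randomness, can be sketched in a few lines of Python. This is only a toy illustration with an invented vocabulary and invented probabilities, not ChatGPT's actual sampling code.

```python
# A toy sketch of probabilistic text generation with a temperature-like
# control over randomness. The word table and its probabilities are invented.
import math
import random

def sample_next(word_probs, temperature=1.0, rng=random):
    """Pick the next word from a probability table, rescaled by temperature.

    Lower temperature sharpens the distribution (more predictable text);
    higher temperature flattens it (more surprising text).
    """
    words = list(word_probs)
    # Rescale log-probabilities by temperature, then renormalise.
    weights = [math.exp(math.log(p) / temperature) for p in word_probs.values()]
    total = sum(weights)
    return rng.choices(words, weights=[w / total for w in weights])[0]

# Invented next-word probabilities after the phrase "the bees".
after_the_bees = {"gather": 0.5, "dance": 0.3, "sleep": 0.2}

rng = random.Random(0)
# With a very low temperature the most likely word nearly always wins.
low_temp = [sample_next(after_the_bees, temperature=0.1, rng=rng) for _ in range(20)]
print(low_temp)
```

At temperature 0.1 the word "gather" dominates almost every draw; at temperature well above 1 the three words appear in roughly equal numbers, which is exactly the fine-tuning of randomness the paragraph above describes.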

To further explore the idea of text synthesis I will use an analogy from the animal world, namely that of a beehive. Just as a beehive is a collective, "networked" entity made up of individual bees working together, AI text synthesis is a process in which individual pieces of information are combined and transformed into a cohesive whole. Just as bees gather nectar and pollen from flowers, AI text generators gather information from a wide range of sources. This information is then processed and transformed into a numerical representation that the AI can use to generate responses. Just as a beehive is able to produce honey through the collective efforts of its individual bees, AI text synthesis is able to produce coherent and informative text through the collective efforts of the information it has gathered. And just as the honey produced by a beehive is a unique and original product that reflects the collective efforts of the bees, the text produced by AI text synthesis is a unique and original product that reflects the collective efforts of the information that was used to train the AI.

The analogy with the beehive points towards the notion of the network, as the activity of the bees is a networked activity. Text synthesis is thus similar to working with a network, a network that is made up of nodes and edges. So how does text synthesis work if described as working with edges and nodes?

NightCafe Studio 07/02/2023

Think of each word or phrase in a text as a node in a network, and the connections between these nodes as edges. In text generation, the AI's algorithms use these nodes and edges to generate a new, coherent text. The nodes in the network can represent different types of information, such as parts of speech, concepts, emotions, and more. The edges between the nodes can represent relationships between these elements, such as associations, dependencies, and similarities.

When the AI generates text, it uses this network to determine which words or phrases should be included in the text, and in what order they should appear. It can generate text by connecting existing nodes in new ways, or by creating new nodes and edges to represent new information. For example, if the AI is generating a story, it might start with a node representing a protagonist, and then use edges to connect this node to other nodes representing events, characters, and locations in the story. As the AI generates new text, it can continue to build upon this network, adding new nodes and edges to create a complete, coherent story. In this way, text generation by AI can be thought of as a networked activity, as the AI uses nodes and edges to connect and build upon existing information in order to generate new text.
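The node-and-edge picture above can be sketched as a tiny program: words become nodes, observed word-to-word transitions become edges, and "generation" is a walk through the graph. The corpus and the bigram edges are invented simplifications; real systems learn far richer representations.

```python
# A minimal node-and-edge sketch of text generation: each word is a node,
# each observed "word A followed by word B" pair is an edge, and a new
# text is produced by walking the graph.
import random
from collections import defaultdict

def build_graph(corpus):
    """Map each word (node) to the list of words that follow it (edges)."""
    graph = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        graph[a].append(b)
    return graph

def generate(graph, start, length, rng=random):
    """Walk the graph from `start`, choosing each next node at random."""
    node, out = start, [start]
    for _ in range(length - 1):
        if node not in graph:
            break
        node = rng.choice(graph[node])
        out.append(node)
    return " ".join(out)

corpus = "the bees gather nectar and the bees make honey and the hive hums"
graph = build_graph(corpus)
text = generate(graph, "the", 6, rng=random.Random(1))
print(text)
```

The output is a grammatical-looking recombination of the corpus, never a verbatim copy of it as a whole, which is the "connecting existing nodes in new ways" described above, in miniature.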

In conclusion, after having seen that AI, or more precisely machine learning, does not write or create texts but only synthesises other texts into a new one, two considerations may follow. One is that the concept of "new" needs to be elaborated on. To what extent can we talk about a new and genuine text if it is a synthesis of relevant texts? Do ML processes echo the ideas of the texts they have been trained on? Two, and this is a little unsettling, I created this text by synthesizing the responses that the OpenAI chatbot provided in reaction to my inquiries and prompts. This then complicates rather than simplifies the discourse on text synthesis as a distinction between human and machine text creation, doesn't it?

Saturday, 28 January 2023

Creative process: Machine learning vs Human

Artificial intelligence (AI) has come a long way in recent years, and one area in which it has made significant strides is the realm of image generation. AI image generators like DALL-E and Stable Diffusion are able to create images from text prompts, and the results can be quite impressive. Although both machine learning and artists rely on images made before them, it's important to note that the creative process behind images constructed by image generators is quite different from that of an artist.

NightCafe Studio (25/01/2023)
Artists often look to the work of other artists for inspiration. They might study the techniques and styles of their predecessors and contemporaries, and use this knowledge to inform their own creative process. Artists might also look to the natural world, as well as their own experiences and emotions, for inspiration. The goal is to find new ways to express themselves and create something that is unique and original in light of the traditions.

In contrast, AI image generators like DALL-E use a different approach. They are trained on a vast dataset of images, and they use this data to learn patterns and relationships between different elements. When presented with a text prompt, the AI uses these patterns to create an image that best matches the prompt. The AI does not have the ability to look at an image and find inspiration in the way an artist would. Instead, it finds common denominators based on the text prompt and creates an image accordingly.

This difference in approach is reflected in the images produced by the two methods. Images generated by AI tend to be highly detailed, but they can also be somewhat formulaic. They often lack the sense of spontaneity and individuality that is often present in the work of an artist. In contrast, images created by an artist tend to be more expressive and unique, reflecting the artist's personal vision and creative process.
NightCafe Studio (25/01/2023)

It's worth noting that AI image generators can be a useful tool for artists, and can be used to create images that might not be possible with traditional techniques. For example, an artist might use AI to generate a complex pattern that they can then incorporate into their own work. Additionally, AI can be used to create variations on a theme, which can be a useful starting point for an artist to create their own unique work.

NightCafe Studio (25/01/2023)

In conclusion, while AI image generators like DALL-E can create impressive images, the creative process behind them is quite different from that of an artist, “even if both processes rely on previous traditions”. Artists look for inspiration in the work of other artists, the natural world, and their own experiences and emotions, whereas AI image generators rely on patterns learned from a vast dataset of images. The resulting images tend to be highly detailed but formulaic, lacking the sense of spontaneity and individuality that is often present in the work of an artist. However, AI can be a useful tool for artists to generate variations on a theme and create images that might not be possible with traditional techniques.

PS Similar to the previous post, I kept the words I wrote and those of the OpenAI chatbot separate when writing this one. As is visible, in this post my words are almost insignificantly few. This is true, but not because the chatbot has evolved between posts so quickly that it no longer needs my input. This post differs from the previous one in that I provided a more detailed prompt this time, outlining the genre, the required length, the claim, as well as the supporting evidence and applications. This raises yet another issue, namely how to conceptually and visually separate the various voices and determine what is my work and what is that of the OpenAI chatbot. It is very likely that in forthcoming posts I will explore this problem in more detail.

Thursday, 19 January 2023

ChatOpenAI, Copyright, Editorial work

In the previous post I used quotation marks to indicate the text has been generated by chat OpenAI. This practice seems to be somewhat disorienting, so I am going to use different colours for the two voices: my words will be in the standard green of this blog and those of chat OpenAI in purple. This distinction in voices is particularly crucial here, as in this blog post I am going to start meditating about copyright issues.

Nightcafe (19/01/2023)
When generating the text below, I first asked OpenAI questions and requested clarifications and further explanations. Then I put the responses into a prompt before requesting OpenAI to create a blog post. The outcome wasn't terrible, but it wasn't perfect either, so I made OpenAI rewrite the post with a slight shift in emphasis. Although the outcome was already better, there was still a need for improvement, so I copy-pasted in an answer to a previous question, rearranged the order of the paragraphs, united and divided paragraphs when necessary, and deleted sentences. This is what I do as an editor for journals, too, so I treated the text as one written by a human being. And below is the result.

With AI, it is possible to generate high-quality content quickly and efficiently, making it an attractive tool for businesses and individuals alike. However, there are some concerns about copyright when it comes to AI-generated text. In this blog post, I will explore the importance of properly acknowledging sources and avoiding plagiarism when using AI for text generation.

First and foremost, it is important to ensure that the sources that the AI is using are properly acknowledged and cited. This means that any content that is generated by the AI should include proper attribution to the sources that were used. This can typically be done by including in-text citations and a list of references at the end of the generated content, just as you would with any other type of written work. It is also important to ensure that you have the necessary permissions to use the sources in your content, and to give credit to the original authors in a way that is consistent with copyright laws and best practices. This is important for two reasons: first, it ensures that the content is not plagiarized, and second, it allows readers to easily verify the accuracy and reliability of the information presented.

When, however, it comes to training the OpenAI model, it draws on patterns and structures that it learned from vast amounts of text data that were fed into it during its training process. This data can include a wide range of text, such as news articles, books, websites, and other types of written content. The model processes this data and transforms it into a numerical representation that it can use to generate responses. However, it's important to remember that because the AI model does not have access to the specific texts that were used to train it, it is not possible for the model to identify specific texts or sources that were used in its training process. That's why it is a good idea to perform a thorough check of your content before publishing it to ensure that it does not contain any potentially infringing material. This can include running your content through a plagiarism checker or reviewing it manually to ensure that all sources are properly cited and acknowledged.
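The kind of check described above can be approximated in code. The sketch below is a naive illustration, not the algorithm of any real plagiarism checker: it flags a draft when it shares long verbatim word sequences with a known source text. Both example texts are invented for the demonstration.

```python
# A naive plagiarism-style check: flag a draft when it shares n-word
# sequences (here, 6-grams) verbatim with a known source text.
def ngrams(text, n=6):
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(draft, source, n=6):
    """n-grams that appear verbatim in both draft and source."""
    return ngrams(draft, n) & ngrams(source, n)

source = "the quality of mercy is not strained it droppeth as the gentle rain"
draft = "as the bard says the quality of mercy is not strained indeed"

overlap = shared_passages(draft, source)
print(sorted(overlap))
```

Real checkers add normalisation, paraphrase detection, and large reference corpora, but the principle of matching long shared runs of text is the same.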

Nightcafe (19/01/2023)

In addition to acknowledging the sources that the AI uses, it is also important to acknowledge the work of the AI itself. This can be done by including a disclaimer or statement in the content that is generated, indicating that it was produced using AI. This ensures that the work of the AI is recognized and credited, and helps to avoid any potential confusion about the authorship of the content.

In conclusion, AI can be a powerful and efficient tool for text generation, but it is important to navigate copyright issues in a responsible and ethical manner. This includes properly acknowledging the sources that the AI uses, acknowledging the work of the AI itself and avoiding plagiarism. By following these best practices, we can ensure that the benefits of AI are realized while also respecting the rights of others.

It should also be emphasized though that it is rather challenging to accurately credit OpenAI’s efforts because I was unable to colour-code the editorial efforts and prompts that went into the creation of this particular text. Therefore, in terms of execution of the task and drawing of conclusions, this article may be considered a failure in and of itself. Even if this is the case, I hope it has at least provided some food for thought. If you have suggestions for how to properly attribute OpenAI’s work, please leave them in the comment area below. You may also do this if you think there is something in the text that needs more explanation.

Saturday, 7 January 2023

Artificial Intelligence and Academic Work – introductory thoughts

“As someone who is interested in the intersection of artificial intelligence and academia, I have been closely following the development of AI text and image generation tools and the ways in which they are beginning to influence the way we work and create” in academia. And also because 2023 will clearly be about AI. “In this series of blog posts, I want to delve into the potential impacts these tools may have on academia, specifically in terms of academic work and workflow.” What is going to be more interesting, however, is that I am going to give prompts to these tools about what they should write and what images they should generate, and this way we’ll also see how effective the tools are. This post has been written mostly by OpenAI chatGPT, and the images have been generated exclusively by NightCafe Studio. To distinguish the different voices, I'll put the words that OpenAI chatGPT generated between quotation marks and my own lines without them.

NightCafe Studio (07/01/2023)

“What are AI text and image generation tools?

AI text generation refers to the use of artificial intelligence to produce written content. These tools utilize machine learning algorithms to analyse and understand language patterns, allowing them to generate coherent and often convincing text on a variety of topics. This can be something as simple as a social media post or as complex as a research paper.

AI image generation involves the use of AI to create original images or manipulate existing ones. This can range from simple tasks like adding a watermark to a photo to more complex tasks like generating entirely new images based on a set of specified characteristics.

How are these tools being used in academia?

One area where I have seen AI text generation being utilized in academia is through the use of automated abstract and summary writing “and also in higher education potentially by students to generate home assignments. These tools can analyse a research paper and produce a concise summary of its key points, saving researchers time and effort in the process.” Furthermore, as I teach at a university in Hungary, I know students will start using it for their written assignments. Instead of being afraid of this, I will teach them how to use it, and show that even to be able to appreciate it we have to learn much together. As I teach essay writing, I will introduce this to my students. Also I will thematise its use, its fair use.

“AI image generation is also being used in academia, particularly in fields such as biology and medicine. For example, AI can be used to create realistic 3D models of biological structures, which can be useful for education and research purposes.” Also for the sake of creating illustrations, one may well use these tools. And also, I will use the images as a model to chart cultural, visual biases, as the images are related closely to the corpus that machine has been trained on. For example, it is interesting to explore the image of this post: the colour scheme, the human hand (why a man and not a woman – the prompt was gender neutral), why a wooden desk, no ring on the fingers, why a pullover and not a shirt, why a laptop and not a PC, the perspective?

“What are the potential impacts of these tools on academic work and workflow?

As someone who has seen first hand the demands placed on academics to publish research and teach, I can see the potential for AI text and image generation tools to increase efficiency and productivity. By automating certain tasks, researchers can save time and focus on more important and complex aspects of their work.

Another potential impact is the democratization of information. AI text generation tools have the potential to make research more accessible by generating summaries and abstracts that can be easily understood by a wider audience. Similarly, AI image generation tools can help to make scientific concepts more visual and easier to understand for a wider audience.

However, it's important to note that these tools also have the potential to be misused. For example, there have been instances of AI-generated text being used to spread misinformation or propaganda.” And it is also possible that papers, assignments will be written with the help of AI. “In order to mitigate these risks, it will be important for researchers and university educators to establish guidelines for the ethical use of these tools.”


As AI text and image generation tools continue to develop and improve, it's crucial that we consider their potential impacts on academia. While these tools have the potential to increase efficiency and productivity, as well as democratize information, it's important to also consider the potential risks and establish guidelines for their ethical use. In this series of blog posts, I plan to explore these topics in greater depth and examine the ways in which AI text and image generation tools are shaping the future of academia.” If you are interested in these cooperative meditations, please read and maybe comment on the posts!

Wednesday, 4 December 2019

Opening Speech #V4Shakespeare

I spent Monday and Tuesday (2-3 December) at a conference we organized at Pázmány Péter Catholic University. These were two awe-inspiring days, when we listened to each other's project descriptions and aimed at finding the common denominators. The theme was "Shakespeare in Central Europe after 1989: Common heritage, national identity," a topic that is timely, interesting, and fascinating for people from the post-Communist countries of the region.

Instead of providing a summary of what happened, I thought that my opening speech may capture the essence, the objective and atmosphere of the conference. So here it is!

And therefore as a stranger give it welcome.
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy. (Hamlet, 1.5.186-8)

As at this conference and in the project we are dealing with Shakespeare’s theatrical reception, and from the theatre, we may well learn that words can be adapted to the present, let me adapt Hamlet’s words to the moment.

We, the 14-15 participants in the project, used to be strangers until Dr Jana Wild welcomed us, brought us together, and initiated this project. We should be and are grateful to her.

Our relationship with the Visegrád Fund was that of a stranger as well. The Visegrád Fund did not know us before, and yet they welcomed us, the reviewers found value in our project, the administrative staff helped us through the ups and downs of the application process. We, thus, are grateful to them as well.

And we as a research team were strangers to Pázmány Péter Catholic University, too. And yet the leaders of the university, the faculty and institute welcomed us. And also it was an angel, i.e. Zsuzsanna Angyal who was the first to welcome us, and who supported our endeavours in a million ways. Furthermore, neither the project nor the present conference could come into being without my dear colleagues, Dr Kinga Földváry, and Dr Gabriella Reuss, dear, dear Gabi and Kinga thank you so much. So for this welcome, our gratitude goes to them.

And though Dr Wild welcomed us, and we have worked so far as a team, we are still strangers to each other. We, the 15 participants from 4+2 countries (Slovakia, Hungary, Czech Republic, Poland + Romania, UK) and 11 universities, have been socialised in different educational, cultural, and historical contexts and national identities. So welcoming each other in Hamlet's manner is still a task ahead of us. We should, and I am positive we will, demonstrate that in spite of and with our differences we can form a cooperative, supportive scholarly team that aims at unravelling the mysteries of that aspect of the universe that we have chosen and that, by our education and interest, we are determined to explore.

God may help us in our endeavours!

Thursday, 19 April 2018

Talk to the Scholar (Book)

I have worked on more than fascinating projects this term (besides teaching and administrative duties), all of which may deserve a separate post. We worked, with more downs than ups, on re-establishing Digital Humanities MA programmes in Hungary. At the moment, though, I do not have a clue about the outcome of these efforts; the documents are in the ministry waiting to be decided upon. I am working on the Hungarian Shakespeare Archive, which fills me with joy, though sometimes I am not sure whether the time and energy I invest in this project are useful to anybody. I worked on the boards of two Digital Humanities journals, a Hungarian one (Digitális Bölcsészet) and a more international one (Digital Scholar), and reviewed articles for both. Also, I had the opportunity to take part in the Text Analysis Across Disciplines Boot Camp at CEU and teach four classes there. For the sake of advertising Digital Humanities in Hungary, I wrote a longish Wikipedia entry about Digital Humanities. Furthermore, I am working on an online course focusing on digital cultural memory, to be finished by September at the latest.

All these are projects that I just enjoy immensely, but they all line up into a more ambitious project, i.e. improving academic life. This terribly, horribly, frighteningly ambitious project consists at the moment of two distinct subprojects. One of them is automating everything that is possible in a literary scholar's job, while the other is understanding, and thus making more meaningful, education at an English department. To put it more bluntly, I am lazy enough to let the machine do what it is better at than me, and I want to ease my job in a way that lets me tell my students why it is beneficial for them to attend my (or for that matter anybody's) classes.

When daydreaming about this ambitious project, I keep looking at what other people and teams around the world are doing in this area. Keeping an eye on them is pretty easy with Twitter and RSS feeds. The next book on my reading list, for example, is Cathy N. Davidson's most inspiring new book The New Education: How to Revolutionize the University to Prepare Students for a World in Flux (New York: Basic Books, 2017), which I came across via my Feedly. The other finding is the Talk to Books project announced 5 days ago on the Google research blog (THX, Feedly again). And it is this project that I would like to write about now, as it nicely fits the scholarly aspect of the dream project.

Talk to Books may well give a hand in research if it manages to improve diligently, and there seems to be every chance of this. Talk to Books is a project within Google Books, and it promises a semantic search engine. What Talk to Books does is rather fancy: you ask a question, the machine makes sense of the question, searches 100,000 books (at the moment), and tries to answer the question by leading the researcher to books wherein the answer lies, highlighting the sentence in the book which seems to answer the question. This seems to be similar to WolframAlpha insofar as semantic search is concerned, and similar to Understanding Shakespeare, the Folger Shakespeare Library and JSTOR cooperation, for both are meant to help scholars with gathering secondary sources. What differentiates the Talk to Books project from WolframAlpha is that the latter provides information, while the former provides information that is documented. And Talk to Books is more sophisticated than the Folger and JSTOR collaboration to the extent that there is an element of a communicative situation in it. Of course, the communicative situation is in a way a fake one, the machine does not understand the question as a human being would, and the answers are sometimes completely off track, but the method of faking communication works pretty well.

The model underlying Talk to Books relies on word vectors, a statistically trained model that relates meaning to strings of letters by analysing the contexts in which words occur. The model, in this case, is trained on the contexts provided by natural language use and involves a highly complicated process of testing, curation of verbal contexts, filtering out mistaken contexts (noise), and reducing the examples to relevant verbal contexts. The code under the hood of Talk to Books is Google's machine learning toolkit, TensorFlow. More about this can be found in the TensorFlow tutorials.
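The "meaning from context" idea behind word vectors can be illustrated with a toy example. The sketch below builds count-based context vectors and compares words by cosine similarity; real word vectors, such as those behind Talk to Books, are learned by neural networks rather than computed from raw counts, and the four example sentences are invented, so this only demonstrates the principle.

```python
# A toy word-vector sketch: represent each word by the counts of words that
# appear near it, then compare words by the angle between their vectors.
import math
from collections import Counter, defaultdict

def context_vectors(sentences, window=2):
    """Build a count vector of nearby words for every word."""
    vectors = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    vectors[w][words[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

sentences = [
    "the audience watched the play",
    "the audience watched the film",
    "the spectators watched the play",
    "the spectators watched the film",
]
vecs = context_vectors(sentences)
# "audience" and "spectators" occur in identical contexts, so their vectors
# align more closely than "audience" and "play" do.
print(cosine(vecs["audience"], vecs["spectators"]) > cosine(vecs["audience"], vecs["play"]))  # prints True
```

This is why a semantic search engine can connect a question about "spectatorship" to books that never use that exact word: words sharing contexts end up close together in the vector space.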

Now tasting is the ultimate test of this pudding, so let us see how Talk to Books works. I am writing a paper about spectatorship, so it might be interesting to check whether Talk to Books can come in handy here. After some trials and specifications -- the user should also adapt to the abilities of the machine -- I came up with this question: "what is spectatorship in a theatre?" After hitting the search icon, approximately 15 books, quotations from these books, and links to them in Google Books showed up on the screen. Of these books, seven were closely related to theatre studies, and I found rather relevant quotations in them. Three of the books centred on spectatorship in the cinema, which is understandable, as spectatorship studies are closely linked to the cinema, and even these books referred directly or indirectly to the theatre, so they would not be irrelevant either. The rest of the books referred to spectatorship in diverse contexts, such as social research, folklore studies, discourse analysis, and rhetoric.

The results of this simple search are telling on three counts. First, the results seem rather relevant, so the word vector technology lying at the back end of deep learning technologies in general, and TensorFlow in particular, seems promising. Second, even the irrelevant hits may well prove beneficial, because they help one look outside the box and bump into scholarly findings that are semantically, but not discipline-wise, related to one's research. Thus, if I intend to be really generous, I should admit that the search engine facilitates interdisciplinary studies as well. Third, Talk to Books may well ease the scholar's tasks: it is easy to copy and paste the relevant quotation, and one can check the context of the quotation via the link to the entire book, or rather to a page in Google Books, and get hold of the bibliographical data of the book.

Although Talk to Books is promising, I can see room for development in three respects. First, Talk to Books, as it is now, does not have a specific target audience. Judging by these first impressions, it seems to me that the target audience is the educated, English-speaking community of intellectuals. This wide user set is understandable from the perspective of the developers, since they need statistically relevant results to test the application. From the scholarly user's point of view, though, the target audience should be the scholarly community, and thus the linguistic behaviour of the scholarly community should be more relevant for the textual corpus used for training TensorFlow. Second, again from the scholarly community's perspective, harvesting metadata is still more laborious at the moment than it could be. If one intends to use, say, Zotero, one has to click many times, i.e. one has to go to the page in Google Books, find the "information about this book" link, search for the ISBN, and paste it into Zotero. Instead of these numerous clicks, one click would be better... Third, some filtering methodology would help the scholarly user on the one hand, and a wider corpus including journals would come in handy on the other, provided the application is to serve scholars. OK, I understand that Talk to BOOKS is about books, but scholars use journals as often as edited volumes or monographs.

In conclusion, I am just overwhelmed by the Talk to Books project. I am overwhelmed, because I can see my dream project, i.e. automating whatever can be automated in scholarly work, come true with this project, or at least one significant aspect of this. I am overwhelmed because I find in this project a promising use of deep learning technologies in ways that are already beneficial. And whatever misgivings I have concerning Google, there are amazing people in their ranks who can and do shape our digital futures.