
Back To The Real AI

By Richard Marshall.

Hector J. Levesque, Common Sense, the Turing Test, and the Quest for Real AI, MIT Press, 2017

In a recent friendly dispute between philosophers of mind Dan Dennett and David Papineau, Papineau disputes Dennett’s denial that animals can actually think. Papineau writes to Dennett at one point the following: ‘Along with nearly all other philosophers and cognitive scientists, I take it that zebras harbour internal cerebral states that represent features of their environment, and which combine with other such states to guide zebra behaviour. Indeed recent work on “model-based learning” argues that such cognitive processing allows animals like zebras “to anticipate how certain means will achieve certain ends in advance of specific experience” – which you accept is the mark of understanding “in a strong sense”. All this seems to count against your insistence that zebras have nothing but “competence without comprehension”.’

Papineau goes on to recommend a causal role for knowledge: ‘If a zebra can represent the presence of a lion, and so work out how to respond appropriately, why isn’t that comprehension in a strong sense? Not only is there a reason for the zebra’s action – a lion is nearby – but the zebra knows the reason.’ Dennett denies this. Both these philosophers are intensely smart and knowledgeable. Both are materialists and think science must underpin any philosophical position they take. Dennett’s approach has been a kind of therapeutic one. He works to show us that many of our favoured views about the mind are held hostage by metaphors and images. He creates different metaphors and images to dislodge the old ones so that we can accept what he takes to be the way of understanding what science shows us. The trouble with his approach for some, including Papineau here, is that sometimes it appears that Dennett has bewitched himself with his own devices. It can sometimes seem as if Dennett is incapable of decoupling himself from his own ideas. He can sometimes write as if he thinks to disagree with him is to be unscientific. He can even write as if there couldn’t be an alternative. Dave Chalmers has answered back on the topic of consciousness, saying that dualism and panpsychism, for example, are philosophical positions that may or may not be supported by science. And in computer science, there is an alternative to Dennett’s own approach to intelligence, his famous ‘intentional stance’ position.

Levesque’s overview of what he calls ‘Real AI’ – which is what AI was until it was largely dropped in the 1990s – fleshes out an alternative vision to Dennett’s of how we might understand intelligence. Indeed, it seems that if the Real AI crew are right then Dennett’s ‘… behaviourist line that all animal intelligence resides in practical competence without mental comprehension’ isn’t ever going to be able to explain the evidence that animals draw on knowledge to think their way round their worlds. The ‘intentional stance’ from this perspective looks like an avoidance strategy, avoiding the scientific data that suggests that for at least some kinds of intelligence, knowledge really does play a causal role.

Just as David Chalmers’ ‘hard problem of consciousness’ is advancing all sorts of ideas that have eventually to be tested scientifically, Levesque’s ‘Real AI’ seems to pose a similarly hard problem, this time the ‘hard problem of intelligence’. What makes it so hard is that those working on it have no idea what symbols and symbolic structures are required for it to work. But like the ‘hard problem of consciousness’, ‘common sense’ thinking really does seem to exist and, for Levesque and his gang at least, no amount of alternative talk about ‘intentional stances’ is going to pass muster. The kind of intelligence they’re interested in – ‘common sense’ intelligence, as he dubs it – produces data that doesn’t seem to be like weather, chemistry, physics, geology, astronomy or reflexes. Levesque takes the data at face value and wants to explain it. Whether that’s the right approach is anyone’s guess. As Levesque says, we’re miles away from knowing, and we don’t seem to be very much interested in designing the kind of AI that would produce common sense intelligence. But one thing that emerges from the book is that there is an alternative research program to the one that dominates current AI.

[John McCarthy]

Levesque’s brisk, sharp and succinct book argues the case for returning to the original brief of AI research. This is the research program that he calls ‘Real AI’. Real AI seeks to implement ‘common sense’ intelligence. John McCarthy, one of the founding fathers of AI, understood this notion of ‘common sense’ as something that was essential to a fully realised human intelligence. An AI without it was, for him, at best only a partial AI. What did a Real AI program have to do not to be partial? McCarthy wrote that ‘… [w]e shall … say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows.’ Real AI aimed to achieve this: ‘… our ultimate objective is to make programs that learn from their experiences as effectively as humans do.’

Doing this proved a formidable task and one that, according to Levesque, has been abandoned for different and easier problems since 1990. It’s not that AI isn’t producing intelligent programs that can learn – we now have programs that can learn how to be masters of numerous games – chess and Go – at levels far beyond human ability, and self-driving cars learning how to drive, and math-solving programs working beyond human cognition. But the tasks that current AI programs master require less than common sense. They can recognise patterns from vast amounts of big data they are fed, and it turns out that from just this, there are many intelligent things we can get the AI to achieve. But as Levesque says: ‘… learning and recognizing cats by yourself is one thing; learning to read by yourself is quite another; and learning to read Wittgenstein yet another.’ The intelligence of current AI is a sort of super-intelligence bereft of common sense. Nothing wrong with that, except that it isn’t the whole story, and it misses an aspect of intelligence that animals, especially human linguistic animals, have.

The Turing Test, often cited as the holy grail for AI, is for Levesque the wrong test for this ‘common sense’ intelligence. The Imitation Game asks that humans be fooled by the behaviour of an AI. Turing devised the test as a response to his own thinking regarding mental terms. Mental terms are vague and imprecise, too imprecise to pin down and investigate through modelling. Rather than trying to get to grips with the problems mental terms presented, he thought he’d cut through that particular Gordian knot and look at the intelligent behaviour produced by the mind instead, on the principle that if something talks and behaves like it has a mind then most likely it does. Searle’s Chinese Room argument was a thought experiment designed to push back against this behaviourist assumption. Searle imagined someone following a manual that allowed him to respond appropriately in Chinese to any Chinese sentence even though he didn’t understand the language. Searle argued that this showed that behaving as if one understood something wasn’t the same as understanding it. The Chinese Room has been debated ever since. Some push back against Searle’s push back and wonder why being able to do what he does doesn’t mean that he does understand Chinese. Levesque pushes back himself, as a computer engineer, noting that the book would need to be larger than the universe to work. He points out that a book set up just to add 20 10-digit numbers (a task considerably easier than conversing in Chinese) would require 10^200 entries, vastly more than there are atoms in the universe. However, a much, much smaller book – an English description of how to add – would do the trick in a few pages and be easily memorized. But then, as Levesque points out, the person would be able to add.

‘If there is no book that would allow a person to get something as simple as the summation behaviour right without also teaching the person how to add, or at least how to add 20 10-digit numbers, why should we think conversing in Chinese will be any different?’ It’s a good question but no one knows whether Searle is right or wrong about his contention. Levesque is content to say that we should tackle the technical questions, like Turing suggested, and see where we end up.
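Levesque’s combinatorial point is easy to check for yourself. The sketch below (my illustration, not from the book) compares the size of a pure lookup table for adding 20 10-digit numbers against a rough estimate of the number of atoms in the universe, and then shows the few-line procedure that replaces the impossible book:

```python
# Back-of-the-envelope check of Levesque's lookup-table argument.
digits_per_number = 10
numbers_to_add = 20

values_per_number = 10 ** digits_per_number          # 10^10 possible 10-digit numbers
table_entries = values_per_number ** numbers_to_add  # (10^10)^20 = 10^200 input tuples

atoms_in_universe = 10 ** 80  # a common rough estimate

print(table_entries > atoms_in_universe)  # True: no such "book" can physically exist

# The procedural alternative: a few lines that genuinely add.
def add_many(nums):
    total = 0
    for n in nums:
        total += n
    return total

print(add_many([1234567890] * 20))  # 24691357800
```

The asymmetry is the whole point: the table grows exponentially with the size of the task, while the procedure that understands addition does not grow at all.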

Levesque is impressed by what he calls the Big Puzzle aspect of AI and understanding the mind. The Big Puzzle is the term he uses for the mistake of thinking that there is just one big puzzle to be solved in understanding the mind. He points out that the mind is a plenum of many puzzles, some best resolved in language studies, others in psychology, neuroscience, evolution and so on. His Real AI and its program to research human intelligent behaviour is just one aspect of this, and Levesque readily concedes that current AI research is useful and legitimate. His regret is that it’s being done as a substitute for Real AI rather than alongside it, and that approaches in psychology and neuroscience are not going to be able to cover AI’s original challenge.

Psychology, he says, can’t do it because it works by looking at outputs, programs and inputs and the numbers are too vast for anyone to produce testable and buildable models of human intelligence. ‘It’s extremely difficult to design an experiment constrained enough to provide meaningful results.’ How do you control for the different lives, memories, beliefs, goals and so forth when trying to account for the sort of ‘common sense’ thinking targeted by Real AI? He thinks this accounts for the reason why the most successful results in psychology are those targeting thinking that doesn’t require pondering, musing, delay or any kind of conscious cogitation. It’s interesting to note that because results about immediate thinking are pretty robust, these are taken to be keys to understanding intelligence. This seems an example of an ‘Empty Raincoat’ case where easily measured phenomena become reified and generalized and phenomena that a field cannot measure are treated as not existing. This has been catastrophic in many different fields and psychology is no exception.

Neuroscience is also booming at the moment but Levesque thinks it won’t be able to help understand common sense intelligence. The hope of neuroscience is that once we have mapped out how the brain works we’ll be able to understand the mind. But Levesque dismisses this hope as a result of muddled thinking. Think about the insides of a computer, the hardware. It’s a network of switches and circuits, just as the brain is a network of synapses and nerves. But typically the source code of the program being run by the computer is not being run directly. It is translated into an object code, a different code in a form the hardware can run directly. The point is that although translating the source code into the object code is easy enough, translating the object code back into the source code is well-nigh impossible. ‘There is no reason to suppose that having an object code in hand would allow someone to recover the source code that generated it,’ is how Levesque sardonically puts it. For example, multiplication can be a number of operations in the object code. And numbers themselves can be encoded using multiple components that need not be close to each other but rather may be distributed widely. This is a big issue for models of ‘distributed representation’ familiar in neural-net models of the mind. Not only that, but ‘the state of a single component may be involved in the representation of more than one number.’ As philosopher Jerry Fodor pointed out ages ago when targeting semantic holist mind models such as those presented by the Churchlands, without the source code, how the hell can you know just by looking what any node is actually representing? This is what makes neuroscience so hard and seems to make it a poor place to go if we want to find out about the ‘common sense’ intelligence of the now abandoned Real AI program. As Levesque puts it: ‘We need to look elsewhere.’
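A small illustration of the object-code point (my own example, not Levesque’s): at the object level a single multiplication may be compiled into shifts and adds, and nothing in those low-level steps announces that ‘multiply’ was the source-level intent:

```python
def times_ten(x):
    # A compiler might emit x * 10 as two shifts and an add:
    # x * 10 == x * 8 + x * 2 == (x << 3) + (x << 1)
    return (x << 3) + (x << 1)

# The object-level steps (shift, shift, add) never mention
# multiplication; recovering the source-level idea from them is
# the hard, "decompilation" direction Levesque is pointing at.
print(times_ten(7))    # 70
print(times_ten(123))  # 1230
```

Inspecting the shifts tells you what the circuit does, not what idea it implements; that is the gap between mapping the brain and understanding the mind.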

This is where Levesque takes up Dan Dennett’s challenge. In one aspect, he agrees with Dennett where Dennett approaches the mind via his ‘design stance.’ Dennett asks: What would be involved in designing something that could do x, where x is observable intelligent behaviour? This is taken up by Levesque. John McCarthy’s ‘common sense’ is what the design stance has to explain. For McCarthy common sense is conditioned by stored background knowledge, filling the gap between stimulus and response. Frederick Bartlett in his book ‘Thinking’ back in 1958 writes: ‘The important characteristics of [the] thinking process, as I am proposing to treat it, can now be stated: The process begins when evidence or information is available which is treated as possessing gaps, or as being incomplete. The gaps are then filled up, or that part of the information which is incomplete is completed. This is done by an extension or supplementation of the evidence, which remains in accordance with the evidence (or claims to do so), but carries it further by utilising other sources of information besides those which started the whole process going, and, in many instances, in addition to these that can be directly identified in the extended surroundings.’ Fodor collaborator Zenon Pylyshyn summarises this as being where the stimulus is neither necessary nor sufficient for any response. Intelligent behaviour is thus what Levesque calls ‘cognitively penetrable’: ‘the decisions you make about what action to perform is penetrated by what you believe,’ in contrast to involuntary reflexes and pure input/output scenarios.

Levesque guides us through the components of the real AI approach. First he sketches what this research program takes knowledge to be. He takes knowing to be about taking the world to be a certain way. It’s an attitude towards a proposition. He takes propositions to be abstracts that have truth values attached – they are true, false, right, wrong and so forth. So for Levesque, knowledge is a propositional attitude where what matters is the truth conditions of the proposition, in other words, what it takes for it to be true or false. Belief is like knowledge except it’s used when knowledge isn’t secure. There are degrees of belief. These ‘propositional stances’ are the basic components of the Real AI project. Where it comes apart from the Dennett-style approaches to mind, and the new AI projects that have followed, is that it denies the other stance within Dennett’s approach, the one we mentioned at the start, his ‘intentional stance’.

What is this? Dennett argues that not just humans but animals like dogs, zebras and bacteria can take a propositional stance. He’s happy to argue that propositional stances are not tied to linguistic aptitude. If they were then only humans could take propositional stances. But what Dennett gives with one hand he takes away with the other. The intentional stance converts all the knowledge and belief propositional stances (indeed all propositional stances, such as those involving wishing, desiring, hoping etc) into ‘as if’ entities. In order to understand the behaviour of a zebra we adopt the intentional stance and say ‘the zebra believes there’s a lion near and that it should run away’. But there is actually no fact of the matter about that ‘as if belief’. Talk of such beliefs, goals, desires and so forth are just stances without facts of the matter. They are neither true nor false because they aren’t propositional. And of course, if it’s true for zebras then it isn’t clear why we shouldn’t approach human intelligent behaviour via the intentional stance as well. Animals display intelligence when they modify their behaviour on the basis of inferences from evidence (as in the zebra and the lion) and if that’s the model adopted towards all intelligence then all ‘propositional attitude’ talk is merely a disguised ‘intentional stance’.

This is where the Real AI program kicks in. If there’s just the intentional stance then what’s the difference between a system knowing something and a system that just finds something stored in a database? Could big data replace knowledge? For Levesque, Dennett and current AI research think there is no intrinsic difference and that big data could indeed replace intelligence in the long run. The Real AI crew think that modelling intelligence in a way that can’t distinguish it from sophisticated data retrieval ignores important data about intelligence where such a distinction is available. This is why the Real AI crowd find the Turing Test inadequate. By setting up informal conversations it is possible for a computer to fake intelligent responses. The Imitation Game is about deception, not about the computer having a conversation.

Winograd Schemas are alternatives to the Turing Test. They are imagined psychological experiments that require an intelligence that is difficult to fake. Answers to the questions asked don’t appear anywhere in existing databases, making background knowledge where the action is. And the questions can be flipped by changing a single word, making it impossible for a faked response to work. Winograd Schema Tests are therefore tests that aren’t about fooling people but about having conversations.
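A widely cited example schema shows how this works (the trophy-and-suitcase pair comes from the Winograd Schema Challenge literature; the Python framing is mine). One word changes and the correct referent of ‘it’ changes with it, so pattern-matching over past text gives no reliable signal; resolving it takes background knowledge about sizes, fitting and containers:

```python
# One Winograd schema, represented as data rather than as a test harness.
schema = {
    "sentence": "The trophy doesn't fit in the brown suitcase because it is too {w}.",
    "question": "What is too {w}?",
    "variants": {
        "big": "the trophy",      # a too-big thing fails to fit INTO something
        "small": "the suitcase",  # a too-small thing fails to CONTAIN something
    },
}

for word, answer in schema["variants"].items():
    print(schema["sentence"].format(w=word), "->", answer)
```

Because the flip hinges on one word, no statistics over the rest of the sentence can decide the answer; that is exactly the property that makes the test hard to fake.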

[Terry Winograd]

This is where the Real AI crowd disagree with adopting the intentional stance for all intelligence, and why the shift from this knowledge-based approach to neuroscience, statistics, economics, psychology etc doesn’t approach McCarthy’s original target. Levesque writes: ‘Rather than a computational version of neuroscience or statistics or whatever, he proposed a discipline with an entirely new subject matter, one that would study the application of knowledge itself – thinking in other words – as a computational process.’

And so Levesque sees the Real AI program as asking: If some intelligent behaviours utilize knowledge, then how is this done? He begins by contrasting two kinds of learning. One kind is learning by extracting patterns and features from data (picking out the cats, say, from an array of big data). The other is learning from books and instructions. So, for example, some words we learn via experience – like ‘hunger.’ Others we learn via language, such as ‘incarnate.’ When we learn about the world we use both: we learn lemons are yellow by experiencing yellow lemons, and that bears hibernate by reading about them, or watching David Attenborough. We learn behaviours such as bike riding not by reading a manual but via trial and error, picking it up after practice and a little instruction from a friendly parent. We learn how to look after canaries by reading the manual because doing it by trial and error would be sadistic. And the two modes of learning can bring about conflicting beliefs: experience learning shows us that the sun rises but language learning tells us that it doesn’t. Hayakawa says, ‘It is not true that we have only one life to live; if we can read, we can live as many more lives and as many kinds of lives as we wish.’

Levesque now turns to the engineering problem that the Real AI program faces and explains that the usual engineering approach to a problem won’t fly in this context. Normally the engineering strategy is to first produce a rough form of x and then refine it so that we get what is actually required. But this approach is useless if there are factors that are missed but important right from the off. This is the problem of long-tailed distributions of events. The idea of the ‘long tail’ he takes from Taleb’s book on black swan events, those unpredictable and rare events that discombobulate the smooth running of predictable systems, such as the one that ran with the idea that all swans were white until a visit to Western Australia brought back the news that some were actually black.

Levesque writes: ‘… the problem of trying to deal in a practical way with a long-tailed phenomenon. If all your expertise derives from sampling, you may never see events that are rare but can make a big difference.’ He gives the example of driving in a white-out. A white-out is where snow and ice erase everything visible to a driver. Relying on the normal routines of driving in such conditions is disastrous. The driver has to try and utilize common sense to survive the situation. By drawing on stored knowledge – better slow down, recall the news story about the guy who did this, etc. – the driver is using knowledge-driven intelligence that goes beyond merely experience-based learning. Where very rare events can be ignored, systems relying on experience learning can do very well. But where the rare events can’t be ignored, because they are significant, such a system will do badly. What a system in such circumstances needs to do is move from mindless to mindful action. To do this we fall back on background knowledge. The general point is this: ‘Our ability to deal with things like Black Swans does not appear to be the result of engineering rougher, less versatile forms of behaviour through additional training. Instead, a completely new mechanism appears to be at work, one that leans on background knowledge.’ And if this is right then the question is: what is the mechanism that can do this?
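The sampling point can be made concrete with a toy simulation (my illustration; the rate is invented for the example): an agent trained on a thousand ordinary days of driving will typically never have seen an event that occurs once in ten thousand days, yet that event may matter more than all the others combined:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

DAYS = 1_000
WHITEOUT_RATE = 1e-4  # assume one white-out per ten thousand days of driving

observed = sum(random.random() < WHITEOUT_RATE for _ in range(DAYS))
print(f"white-outs seen in {DAYS} days of experience: {observed}")
# The expected count is 0.1, so a purely experience-trained driver has
# most likely never encountered the situation at all when it finally hits.
```

This is why refining a rough experience-based system never gets you there: refinement improves behaviour on what the sample contains, and the long tail, by construction, isn’t in the sample.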

This is where we can see the distance between this approach and Dennett’s ‘intentional stance’ approach. Knowledge is given a genuine causal role, so that it is not an ‘as if there was knowledge’ assumption. In trying to work out how to engineer the mechanism for this role, Levesque and the Real AI team focus on the role of symbols, the kind found in algebra and symbolic logic. Both of these examples of symbol processing use a small number of rules, which is immediately attractive to the AI engineer. How do we do this processing? ‘We are taught a procedure to follow that will do the job: the patterns to look for, the steps to take.’ Once you have the symbols in place and the rules, all that is left is mechanical symbol processing. That’s not the difficult challenge. The challenge is the ingenuity and creativity needed to get from word problems to symbolic expressions capable of such mechanical processing. This is the essence of digital computation. The analogy: just as arithmetic functions map numbers to numbers, symbol-processing operations transform one string of symbols into another string of symbols.

Computation requires that we take a ‘design stance’, as Dennett advises: lay out a definition of what a computation should be, analyse its properties, and then decide over time whether that definition is good enough. The Turing machine is just such a definition. A Turing Machine takes symbolic representations encoded in a linear sequence of bits and then processes them. So, for example, it can take a 2-dimensional array of numbers and lay them out on a tape, row by row, with each number itself laid out as a sequence of bits. Once so encoded, the symbols can be mechanically processed using a sequence of small steps, each restricted in scale and extent. The incredible beauty and power of the Turing machine is summarized by Levesque: ‘… every digital computer can be shown to be a special case of a Turing machine – whatever it can do, a Turing Machine can do as well… any function over strings of symbols computable by a modern digital computer is also Turing computable. Most scientists believe that this will be true of any computer we ever build.’
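To make the ‘small local steps’ idea concrete, here is a minimal Turing machine in Python (my sketch, not from the book): half a dozen rules that increment a binary number by scanning right to the end of the input and then propagating a carry leftward, one tape cell at a time.

```python
from collections import defaultdict

# (state, symbol) -> (new_state, symbol_to_write, head_move)
RULES = {
    ("right", "0"): ("right", "0", +1),   # scan right over the number
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),   # hit the blank: turn around
    ("carry", "1"): ("carry", "0", -1),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("done",  "1",  0),   # 0 + carry = 1, halt
    ("carry", "_"): ("done",  "1",  0),   # ran off the left: new leading 1
}

def run(bits):
    tape = defaultdict(lambda: "_", enumerate(bits))  # blank everywhere else
    state, head = "right", 0
    while state != "done":
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("1011"))  # "1100"  (11 + 1 = 12)
print(run("111"))   # "1000"  (7 + 1 = 8)
```

Every step looks at one cell, writes one symbol and moves one square; yet by Levesque’s point above, scaled-up machines of exactly this kind can compute anything a modern digital computer can.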

What is important to grasp in this is that symbols here are not being understood as a locus of communication but rather as a locus of computation. Leibniz is a forefather of this. He saw that when doing arithmetic we interact not with numbers but with their symbolic expressions, as in algebra. He invented the calculus to figure out symbolic solutions to problems of areas and tangents. He considered thinking to be a matter of ‘going over believed ideas’ and drew an analogy with arithmetic. We don’t interact with the ideas but with the symbolic expression of those ideas. From this has come the thought that the rules of arithmetic are analogous to the rules of some kind of logic, something that Frege, Russell, Whitehead, Wittgenstein and the early Analytic philosophers brought to fruition in the early twentieth century. Mathematics manipulates symbols that mirror the relations among the numbers being represented. Similarly, in logic we manipulate symbols that mirror the relations among the ideas being represented. This insight offers the Real AI program the mechanism for turning abstract beliefs into physical behaviour. Ideas – the objects of human thought – are abstract and formless until represented symbolically. Once so symbolized we can then calculate.

Brian Smith’s ‘knowledge representation thesis’ sets out how this might be done. In the thesis a system stores knowledge in a knowledge base as symbolic expressions. It processes the knowledge base by using rules of some kind of logic to derive new symbolic representations going beyond the initial representation. Some conclusions will concern what the system should do next. The system then decides how to act out these conclusions. This is the basic requirement for going beyond Dennett’s ‘intentional stance’ when dealing with complex behaviour. In this thesis it is a genuine and actual knowledge base that causes the behaviour, not some ‘as if’ supposition.

And Smith’s thesis applies wherever the ‘intentional stance’ is thought to be required. The rule of thumb is ‘if you assume the intentional stance then assume a knowledge base.’ So this kind of system will have a memory with symbolic representations of the ideas. The memory will have two properties. The first is that outsiders to the system can interpret the symbolic structures as propositions of some sort that are believed by the system. The second is that these symbolic structures are not inert. The computational system housing them works on them just as in symbolic logic or algebra. This is what the Real AI program attempts to work out. It wants to discover whether this ‘knowledge representation’ thesis is true, or at least whether there are reasons for thinking that humans are like that. And then they want to see if they can build one.

Levesque is clear: this is a working hypothesis for the Real AI teams. And they’ve been largely put on ice since the 1990s, so no one knows the answer to these questions. But there are, at least on the face of it, no reasons for dismissing it out of hand without looking. As Terence Deacon has pointed out, evolution has created a ‘symbolic species’. Perhaps there is a connection between using and processing internal symbols and using external ones. For the reasons we’ve touched on above, reverse engineering neurons won’t give us the source code, so we won’t find the evidence needed by looking there.

And that the problems facing the Real AI project are very hard is probably one of the reasons why the approach stalled when it did and why Dennett’s ‘intentional stance’ seemed both less challenging and more promising. For sure, the two big obstacles to Real AI are pretty gigantic. Firstly, no one knows what kinds of symbolic structures are required. And secondly, we don’t know what kind of symbol processing is needed to extend representational beliefs to affect behaviour in the right way. In his 1958 paper ‘Programs with Common Sense’ McCarthy assumed the representation would be first-order predicate calculus and the reasoning computational deduction. Basically, the recipe assumed a classical logic as developed by the Analytic philosophers. Since then it has been conceded that this may be too strict. Classical logic is not the ideal symbolic representation language, nor the ideal of the reasoning space needed. So the role of classical logic in this program is complex. The stricture to ‘use what you know’ is ok but hardly sufficient. For example, not every logical consequence is relevant – think of contradictory beliefs, where every sentence whatever is a logical consequence of a contradiction; on the other hand, some logical consequences might be relevant but too hard to work out. And then some consequences might not be logical conclusions but rather just reasonable assumptions (I don’t see your hidden lemon, but I assume it’s yellow). And there are many ways of using what you know that do not involve drawing conclusions at all. This is why modelling knowledge-based intelligence on classical logic is thought to be too restrictive, despite brilliant examples showing how classical logic could handle deeply human thoughts such as vague ones. Marvin Minsky, another founder of the Real AI movement, summarizes the point: ‘… logical reasoning is not flexible enough to serve as a basis for thinking.’ And many philosophers agree.
Alternatives to classical logic, such as probability models, run up against similar problems: they can still lead to irrelevant true conclusions, or relevant conclusions that are too hard to calculate, or conclusions that should be drawn only in the absence of information to the contrary, or beliefs used in ways that are not the result of drawing conclusions at all.
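The ‘reasonable assumption’ point – the hidden lemon assumed yellow – can be sketched as default reasoning, which classical deduction cannot express because new information can retract an old conclusion (the function and fact names below are invented for illustration):

```python
# A default conclusion is drawn in the absence of contrary information,
# and withdrawn, non-monotonically, when such information arrives.
def colour_of(thing, kb):
    if ("colour", thing) in kb:             # hard knowledge wins
        return kb[("colour", thing)]
    if kb.get(("kind", thing)) == "lemon":  # default: lemons are yellow
        return "yellow"
    return "unknown"

kb = {("kind", "hidden_fruit"): "lemon"}
print(colour_of("hidden_fruit", kb))        # "yellow" -- assumed, not deduced

kb[("colour", "hidden_fruit")] = "green"    # new information arrives
print(colour_of("hidden_fruit", kb))        # "green" -- the default is retracted
```

In classical logic a conclusion, once derived, can never be undone by adding premises; everyday common sense does this constantly, which is exactly Minsky’s complaint.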

[Marvin Minsky]

And if these weren’t difficulties enough, there is still no idea what symbols should be used. Should they be sentences in a natural language? But then how could they be used to draw conclusions? How can English, say, be our way of providing knowledge if using it properly already requires knowledge? The threatened infinite regress is obvious. Nevertheless the Real AI-ers remain confident that they are right at least to try to work out whether the knowledge representation approach is true. They liken it to Darwinian theory: they have a basic fact, human knowledge, and posit symbol processing as the only plausible story fit to explain it, just as Darwinism has the basic fact of evolution (in the fossil record, DNA etc) and the only plausible story that can explain it, i.e. natural selection.

What this implies is that Real AI will need a massive knowledge base and a computational implementation powerful enough to process the massive symbolic structures. Levesque summarises the challenge: ‘Learning to recognise cats by yourself is one thing; learning to read by yourself is quite another; and learning to read Wittgenstein, yet another.’ Any Real AI solution will need to have been spoonfed what we know and be able to use it effectively. And so the question for the Real AI team is an empirical one: ‘what sorts of computational designs will be sufficient to account for what forms of intelligent behaviour?’

Levesque is quite confident that we will in the future build a computer with common sense, one that knows a lot about the world and can deal with both routine and unexpected situations. But we still don’t know what’s required, and engineering it will be very challenging. The main reason Levesque gives for why we are so far from realizing this possibility is that we don’t want to. There’s no demand for AI with common sense. We seem to like our AI supersmart and dumb. None of our current billion-dollar research projects into AI are looking to create fully intelligent AI with common sense. According to Levesque, we’re creating systems that can deal with stable, normal circumstances but which are not able to deal with the unexpected. Levesque is quietly alarmed: ‘… if this is the future of AI, we need to be careful that these systems are not given the autonomy appropriate only for agents of common sense.’ Automation poses political questions rather than technological ones for the AI community.

What is assumed by this approach to Real AI is that the sort of common sense intelligence humans have can’t just emerge from the present types of AI systems being built. We are building hugely intelligent and powerful AI lacking common sense. The fears of the likes of Hawking, Musk and Gates, who see AI as posing an existential threat, become nuanced by this realization that the ‘… technology [that] could decide for itself to misbehave…’ will be without common sense. If we weren’t already just a little afraid, Levesque ramps up the alarm just one more notch.

It’s a lovely book, clearly written and with many good examples. It’s a little thin on the philosophers who are working in this area, and some of the discussion would have been deepened by their input. Having said that, Levesque has kept his story succinct and non-technical so that people like me can (just) keep up. It’s a timely book about an exciting and cutting-edge technology and research program. The stakes are high, and Levesque has made a powerful case for returning to AI’s origins.

Richard Marshall is still biding his time.

Buy his new book here or his first book here to keep him biding!

First published in 3:AM Magazine: Monday, August 14th, 2017.