Programming with Dyslexia
Christos Katsoulas, Computer Science teacher, katsoulas.info
By Noam Chomsky, Ian Roberts, Jeffrey Watumull.
Source: The New York Times
Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
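To make the phrase “statistically probable outputs” concrete, here is a deliberately tiny sketch. It is not how ChatGPT or Bard actually work internally (they use large neural networks); it is a bigram model that picks the next word purely by how often it has followed the previous one in its training text. The corpus and all names in the code are invented for the illustration.

```python
# Toy bigram model: "learn" which word tends to follow which, then emit the
# statistically most probable continuation. Purely illustrative; the corpus,
# function name and variable names are made up for this sketch.
from collections import Counter, defaultdict

corpus = "the apple falls the apple falls the apple drops the ball falls".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequently observed successor of `word`, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("apple"))  # 'falls' -- chosen by frequency, not by any notion of gravity
```

Real systems replace raw counts with neural networks trained on vast corpora, but the underlying move is the same: continuation by statistical likelihood, which is exactly the mechanism the authors go on to criticize.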
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.
Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)
But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data.
Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
Consider the following exchange that one of us (Dr. Watumull) recently had with ChatGPT about whether it would be ethical to transform Mars so that it could support human life:
Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
This is a list of the best LeetCode questions that teach you core concepts and techniques for each category/type of problems:
https://www.teamblind.com/post/New-Year-Gift---Curated-List-of-Top-75-LeetCode-Questions-to-Save-Your-Time-OaM1orEU
Another good list is the CSES Problem Set:
https://cses.fi/problemset/
The simplest book to get anyone started studying for coding interviews is “Cracking the Coding Interview”. (A minimal sketch of one core technique that these lists drill follows below.)
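As a small taste of the kind of core technique these lists drill, here is a minimal Python sketch of the hash-map lookup pattern behind the classic Two Sum problem from the Blind 75 list; the function name and example values are my own.

```python
def two_sum(nums, target):
    """Return indices of two numbers in `nums` that add up to `target`.

    The classic hash-map pattern: a single pass that stores each value's
    index, so the needed complement can be looked up in O(1) and the whole
    search runs in O(n) time instead of the O(n^2) brute force.
    """
    seen = {}  # value -> index where it was seen
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return seen[complement], i
        seen[value] = i
    return None  # no pair adds up to target

print(two_sum([2, 7, 11, 15], 9))  # (0, 1)
```

The same idea of trading memory for lookup speed reappears across many of the list's categories, which is why working through them teaches techniques rather than isolated puzzles.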
“Everyone in this country should learn to program a computer, because it teaches you to think.” — Steve Jobs
Have you ever wondered what Steve Jobs was trying to emphasize with this sentence?
Is it about code writing?
Should everyone write code?
Should everyone be a programmer?
NO. Not at all.
The “Everyone should learn to code” movement is wrong because it assumes that writing code is the final goal. Everyone, including most software developers, thinks that their job is to write code. But actually, it is not. The job of a software developer is to solve problems. It took me years to understand that.
Most people who call themselves programmers can’t even code. Tragically, many of them are not even aware of what their job is really about. Because of this, many software programs have very short lifetimes: they were developed to solve a specific problem, yet over time they fail even at that, because the programmers behind them never understood what the problem was. In the end, the program dies.
If you talk to senior programmers, I mean real programmers, and ask them what they think about writing code, they will tell you that the best code is no code at all, and that a good programmer is the one who knows how to avoid writing unnecessary lines of code.
The “Everyone should learn to code” movement is not about coding. It doesn’t mean that everyone should be a programmer and develop software that people can use. Essentially, it’s all about problem-solving.
That is because programming itself covers a whole range of skills with real-world uses: critical thinking, problem analysis and solving, logic, and so on. These are skills the current generation of kids seems to be missing out on in their education.
I am not saying that we shouldn’t teach our kids how to code or that no one should learn to code. I am trying to emphasize that coding is just a tool for solving a problem. Yes, programming can teach you how to think and how to approach a certain problem. But being a programmer is a completely different thing.
I would rather call this movement “Everyone should learn how to solve a problem” instead of “Everyone should learn to code”.
By Ian Watson.
In 1936, whilst studying for his Ph.D. at Princeton University, the English mathematician Alan Turing published a paper, “On Computable Numbers, with an application to the Entscheidungsproblem,” which became the foundation of computer science. In it Turing presented a theoretical machine that could solve any problem that could be described by simple instructions encoded on a paper tape. One Turing Machine could calculate square roots, whilst another might solve Sudoku puzzles. Turing demonstrated you could construct a single Universal Machine that could simulate any Turing Machine. One machine solving any problem, performing any task for which a program could be written—sound familiar? He’d invented the computer.
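To make Turing's idea concrete, here is a minimal Python sketch of a machine driven purely by a rule table reading and writing symbols on a tape. The rule names, symbols and the example machine (a unary “add one”) are my own illustrations, not Turing's notation.

```python
# Minimal Turing-machine sketch: a rule table maps (state, symbol) to
# (symbol to write, head move, next state). The example machine below is
# illustrative only, and the tape only grows to the right in this sketch.
def run(rules, tape, state="start", head=0, blank="_"):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head == len(tape):
            tape.append(blank)            # extend the tape on demand
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example: append one '1' to a block of 1s (unary "add one").
rules = {
    ("start", "1"): ("1", "R", "start"),  # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 at the first blank, stop
}
print(run(rules, "111"))  # '1111'
```

The universal machine's trick is that a rule table like `rules` can itself be written onto the tape as data, so one fixed machine can simulate any other machine simply by reading its description.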
Back then, computers were people; they compiled actuarial tables and did engineering calculations. As the Allies prepared for World War II they faced a critical shortage of human computers for military calculations. When men left for war the shortage got worse, so the U.S. mechanized the problem by building the Harvard Mark 1, an electromechanical monster 50 feet long. It could do calculations in seconds that took people hours.
The British also needed mathematicians to crack the German Navy’s Enigma code. Turing worked in the British top-secret Government Code and Cipher School at Bletchley Park. There code-breaking became an industrial process; 12,000 people worked three shifts 24/7. Although the Polish had cracked Enigma before the war, the Nazis had made the Enigma machines more complicated; there were approximately 10¹¹⁴ possible permutations. Turing designed an electromechanical machine, called the Bombe, that searched through the permutations, and by the end of the war the British were able to read all daily German Naval Enigma traffic. It has been reported that Eisenhower said the contribution of Turing and others at Bletchley shortened the war by as much as two years, saving millions of lives.
As the 1950s progressed business was quick to see the benefits of computers and business computing became a new industry. These computers were all Universal Turing Machines—that’s the point, you could program them to do anything.
“There will positively be no internal alteration [of the computer] to be made even if we wish suddenly to switch from calculating the energy levels of the neon atom to the enumeration of groups of order 720. It may appear somewhat puzzling that this can be done. How can one expect a machine to do all this multitudinous variety of things? The answer is that we should consider the machine to be doing something quite simple, namely carrying out orders given to it in a standard form which it is able to understand.” – Alan Turing
By the 1970s a generation had grown up with “electronic brains,” but they wanted their own personal computers. The problem was that they had to build them. In 1975 some hobbyists formed the Homebrew Computer Club; they were excited by the potential of the new silicon chips to let them build their own computers.
One Homebrew member was a college dropout called Steve Wozniak who built a simple computer around the MOS 6502 microprocessor, which he hooked up to a keyboard and television. His friend Steve Jobs called it the Apple I and found a Silicon Valley shop that wanted to buy 50 of them for $500 each. Apple had its first sale and Silicon Valley’s start-up culture was born. Another college dropout, Bill Gates, realized that PCs needed software and that people were willing to pay for it—his Microsoft would sell the programs.
Turing’s legacy is not complete. In 1950 he published a paper called “Computing machinery and intelligence.” He had an idea that computers would become so powerful that they would think. He envisaged a time when artificial intelligence (AI) would be a reality. But, how would you know if a machine was intelligent? He devised the Turing Test: A judge sitting at a computer terminal types questions to two entities, one a person and the other a computer. The judge decides which entity is human and which the computer. If the judge is wrong the computer has passed the Turing Test and is intelligent.
Although Turing’s vision of AI has not yet been achieved, aspects of AI are increasingly entering our daily lives. Car satellite navigation systems and Google search algorithms use AI. Apple’s Siri on the iPhone can understand your voice and intelligently respond. Car manufacturers are developing cars that drive themselves; some U.S. states are drafting legislation that would allow autonomous vehicles on the roads. Turing’s vision of AI will soon be a reality.
In 1952 Turing was prosecuted for gross indecency, as being gay was then a crime in Britain. He was sentenced to chemical castration. It’s believed that this caused depression, and in 1954 Turing committed suicide by eating an apple poisoned with cyanide. Outside of academia Turing remained virtually unknown because his World War II work was top-secret. Slowly word spread about Turing’s genius, his invention of the computer and artificial intelligence, and after a petition campaign in 2009, the British Prime Minister Gordon Brown issued a public apology that concluded:
“…on behalf of the British government, and all those who live freely thanks to Alan’s work, I am very proud to say: we’re sorry. You deserved so much better.”
June 23, 2012 is the centenary of Alan Turing’s birth. I’m happy to say that finally Turing is getting the recognition he deserves, not just for his vital work in the war, but also for inventing the computer—the Universal Machine—that has transformed the modern world and will profoundly influence our future.