For a better formatted version of this blog post, please go to https://chileantheoryguy.substack.com/p/mathematics-philosophy-not-poems.

What can be as silly as learning how to surf from a book? (No offense to Kenneth Martin, whose book I haven't read.)

Figure 1: A book about learning how to surf.

In this post, I'll try to convince the reader that a valid answer might be: "learning how to Math from a book!".


Intentionally, I'm using Math as a verb in the same way one uses Surf both as a verb and for the sport itself, and I think this distinction is pretty darn important.

I have never surfed, but I'm inclined to think that reading books about surfing can be instructive in a particular sense; you can read about Surf, about its different elements, about what kind of weather is ideal for surfing, the wood that makes the best boards, etc. But you're not exactly learning how to Surf; at most you could get some ideas about things to have in mind once you go to the beach, which is when the "learning how to Surf" would happen. I think most people would agree with me on this, but they would argue back that Math or Philosophy are different, and those subjects you can indeed learn from a book. When asked "what is the difference?", I'm inclined to think they would answer that it's because Surfing is a physical activity, that you need to learn with your body, whereas Math or Philosophy are brainy stuff and thus a book going through your brain is enough to learn them. I think this is just wrong, and that learning how to Math from a book is as silly as learning how to Surf from a book!

To clarify right away, I think Math books are extremely important and helpful, but in a similar way that a surfing book can be important and helpful; I believe one could get a lot out of a surfing book if one goes to the beach often and contrasts one's hands-on experience with the book, using it as a guide to making the hands-on in-water learning experience more effective, but never as a replacement for it. For Math, I believe one should not "read books", but rather have them as companions for Mathing. In fact, I think the fact that one can just go to a coffee shop with a Math book and a notebook and Math away is an amazing gift of life; you can Math accompanied by the great giants of the present and the past, and there's not even the need to stand on their shoulders as Newton famously did. Just Mathing side by side with them, behind them even, is an amazing opportunity.

The story, however, is far from over. In the remainder of this post I'll ambitiously attempt the following:

  • Present more support for these previous claims, relating them to philosophy and poetry.

  • Dive into the pragmatics of how love for Math can get complicated, and what has worked for me as a way to recover the flame in our relationship, which has definitely been hurt multiple times across the years.

  • Frame things in a way that might be helpful for establishing connections with other aspects of behavioral pragmatics that transfer across disciplines.

Immanuel Kant, my favorite philosopher of all time, is famously quoted as having said something along the lines of

"One should not learn Philosophy, but rather how to Philosophyze." – Immanuel Kant (freely quoted).

A concrete citation is the following, from his lecture notes on educational aspects of Philosophy.

"One can thus learn philosophy, without being able to philosophize. Thus whoever properly wants to become a philosopher: he must make a free use of his reason, and not merely an imitative, so to speak, mechanical use. […] How can one learn philosophy? One either derives philosophical cognitions from the first sources of their production, i.e., from the principles of reason; or one learns them from those who have philosophized. The easiest way is the latter. But that is not properly philosophy. Suppose there were a true philosophy, [if] one learned it, then one would still have only a historical cognition. A philosopher must be able to philosophize, and for that one must not learn philosophy; otherwise one can judge nothing. […] One can make a distinction between the two expressions, to learn philosophy and to learn to philosophize. To learn is to imitate the judgments of others, hence is quite distinct from one's own reflection." (Lectures on Pedagogy, pulled from Manchester University)

This citation really captures most of the issue for me. But there's still a lot left; in particular, what does the difference between learning Philosophy and learning to Philosophyze look like? What shall we do in practice the next time we sit at a café?

I can't resist pulling out another reference, this time from the great Mexican poet José Emilio Pacheco. Decades ago, he wrote this beautiful not-poem that captures part of the same idea, but this time in relation to poetry. I now have the pleasure of introducing you, dear reader, to my English translation of this brilliant not-poem, which, as far as I am aware, is not available in English anywhere on the internet.

In Defense of Anonymity. A letter to George B. Moore to deny him an interview. José Emilio Pacheco (1939-2014).

I don't know why we write, dear George, and sometimes I wonder why later on we publish what we have written. In other words, we throw a bottle to the sea, which is full of garbage and bottles with messages. We will never know to whom nor where it will be carried by the tides. Most likely, it will succumb in the storm and the abyss, in the bottom sand, which is death.

And nonetheless, this act of a castaway is never useless. Because on a Sunday you call me from Estes Park, Colorado. You tell me that you have read what's inside the bottle (across the seas: our two different languages) and you want to interview me. How to explain to you that I have never given an interview? That my ambition is to be read, not to be "well known"? That it is the text that matters and not the author of the text? That I disapprove of the literary circus?

I then receive your enormous telegram (how much must you have spent, dear friend, to send it). I cannot answer nor remain silent. And these verses come up to me. It is not a poem. It does not aspire to the privilege of poetry (it is not voluntary). And I will use, as the ancients did, the verse as an instrument for all of that (tale, letter, treatise, drama, story, agriculture manual) that today we say in prose.

To begin not-answering you I'll say: I have nothing to add to what is there in my poems, I am not interested in commenting on them, I am not worried about my place in "history" (if I have any). I write and that is all. I write: I give half of the poem. Poetry is not black signs on the white page. I call poetry that place of encounter with foreign experience. The reader will make (or not) the poem that I've only sketched.

We don't read others: we read ourselves in them. I find it a miracle that someone I don't know can look at themself in my mirror. If there is a merit in this (said Pessoa) it belongs to the verses, not to the author of the verses.

If by any chance one is a great poet, one will leave behind three or four valid poems, surrounded by failures and drafts. One's personal opinions are truly of little interest. Weird world we live in: the interest in poets is every day a bit bigger, and the interest in poems every day a bit smaller. The poet stopped being the voice of the tribe, the one who speaks out for those who don't. The poet has become another entertainer. Their drunkenness, their sexual scandals, their clinical history, their alliances and beefs with the other clowns of the circus, or the trapeze artist or the elephant tamer, have assured them a wide audience that no longer needs to read their poems.

I keep thinking that poetry is something else: a form of love that exists only in silence, in a secret pact between two personhoods, between two that almost always are strangers to one another. Did you by any chance read that Juan Ramón Jiménez thought, half a century ago, about editing a poetry magazine that was going to be called Anonymity? Anonymity would publish poems, not signatures; it would be made out of texts and not out of authors. And I wish, as the Spanish poet wished, that poetry were anonymous, since it is collective (to that I aim my verses and my versions). There is a chance that you will agree with me. You, who have read me and do not know me. We will never meet, and nonetheless we are friends.

If you have enjoyed my verses, what does it matter that they're mine / from another / from no one? The truth is, the poems you've read are yours: you, their author, who invents them when reading them.

Wow. Just wow. Amazing, isn't it? If there is ever a contest for the coolest ways of denying an interview, I believe this should be enough to take the first three places alone.

This not-poem (out of respect for JEP's will) goes far beyond my point in this essay but meets it somewhere along the way: poetry is not reading the work of another, but rather working it out through the practice of reading it. Perhaps Pacheco's point is partly that the important thing once again is not poetry, but Poeting, which is halfway done by the writer, halfway done by the reader who poets the verses out one by one whilst reading them. Once humanity is extinguished (if ever) there will be no poetry, merely black signs printed on white pages.

So how does one Poet? How does one Math? How does one Philosophyze? I'll try to sketch a couple of ideas that have been personally important to me.

Also, this is probably a good time for you, dear reader, to take a short break before you keep reading.

Part II: Pragmatics that you probably already know but are good to remember

Getting the game

The idea of "getting the game" has probably been the most important idea I've learned in my lifetime. It's probably pretty obvious in hindsight, but some of us need the extra help.

First, in this framework, I think of many things as games. Math as a game. Poetry as a game. Philosophy as a game. Music as a game. Dentistry as a game. Dating as a game. Living life as a game. The semantics of this should become clearer as we go. These games are composed of sub-games; for example, Math can be a sub-game of the game of life, and your Calculus class a sub-game of Math. The idea now is that I want to do well at a game, and enjoy playing it. Some games I choose to play, and some games I'm forced to play by external forces. So how do you do well at a game, and enjoy playing it? Getting the game is always the first step.

Getting the game means developing an understanding of why other people have liked this game in the past, and why they have found it useful or interesting. It means developing an understanding of the mechanics of the game, the winning conditions, and of the prizes at stake. It means developing some honest respect for the game and the good players and the good moves.

Let's take dentistry as an example. Here's how I started really enjoying going to the dentist. I once talked to a dentistry student I met in Chile, and they were really passionate about dentistry. Weird, right? So it made me curious: what the #%@& do you like about dentistry? We talked a bit about it and it got me curious. After a bit of thought, I guess it's actually pretty cool that our bodies have teeth, these marble-looking things in our mouths that serve as the first interface between food and our body. Isn't it kinda crazy that our DNA encodes the fabrication of these pieces, and that it actually includes doing it twice; baby teeth and permanent teeth? And they're so delicate and in constant interaction with external bodies that they require a lot of additional care, more so than other parts of our body. They're far from uniform, with different teeth serving different purposes, their shape and structure accordingly different. Isn't that kind of freaking cool? If you had sat me down in an abstract world to design the way large animals would get their nutrients, I don't think I'd have ever come up with such an amazing solution. So how did dentistry originate? What were the cornerstones of its development into its modern form? What are the most important open problems in dentistry? What cool questions about teeth are there that we don't have answers for? This is what I mean by starting to get the game of dentistry. Once you get a bit more of it, it's pretty cool, and then once a year you get to visit someone who works with teeth full-time and gets to look at yours, your particular set of teeth: what is wrong with them? What is good about them? Am I gonna lose them? What's the right way to take care of them? All of these start being cool questions once you've got more of the game. Sure, the procedure might still hurt, but now it's a painful part of a game you get. It makes it much better for me. I really recommend this animated video (in French, but you can use auto-translate), probably meant for kids, about what the heck is happening inside your mouth.

Now let's think about chess. It might seem like a boring nerdy game to a lot of people, but oh boy it's beautiful once you get the game. Once you start getting the ideas behind gambits, behind openings, behind castling and king safety, once you start getting a good grasp of what the pieces are worth, and all the fascinating sub-games inside of chess. Once you get more of the game, not even being good at it, you can watch a video of Magnus Carlsen playing, and oh boy isn't that beautiful? It's a pure display of elegance and mastery in a way beyond the dreams of the beginner. And even though I assume a ton of the deep things going on are flying right over my head, the tiny superficial portion that a noob like me is able to appreciate is already mind-blowing; some moves are so freaking cool they make you want to stand up and clap.

I was 14 when I started getting the game of academics. Before that, I was a really bad student. I had bad grades and bad behavior at school. So much so that I was left in a "conditional" state at the second school I transferred to, the Chilean term for when you're given a last warning before getting kicked out (which would have really limited my options of going to a good college). But the next school year something changed.

We had a new math teacher, and he started the year with trigonometry. It's hard to find something that sounds worse to a student that's at risk of getting kicked out of school at 14 than trigonometry. But he started the class talking about this Greek guy, Hipparchus, well before Christ, trying to understand what was going on in the skies; where was the moon gonna go next, where was the sun gonna go next, how could you locate your homeland if you're lost at sea and all you have is the stars as a guide. Álvaro Sanchez, my new math teacher, really made me think about this dude, perhaps sitting on a boat in the night, looking at the stars and thinking about whether he could figure things out, whether he could understand how the brilliant bodies in the distant sky work, and act upon it, to orient oneself, to predict the tides, etc. I saw something in there, and I'm grateful to this day to Álvaro Sanchez for gifting me that moment. For the first time, I realized that there was, in the dreadful subject of Maths, a game that Hipparchus had engaged in, and for the first time, I had the feeling that I could also, perhaps, borrow the joystick for a second and play. Everything changed for me there. I started seeing the manipulation of trigonometric equations in a similar fashion to how I see a chess tactic now, or a punching combo in Street Fighter.

Since then, whenever I try to learn something new, I try to get the game first. Even if I am forced to do or learn something, even if it's not appealing to me personally, I try to get why others have cared, and what others have been captivated by inside the subject. I've also learned that playing a game you don't get is sometimes the only way to get the game, and there will be more on this particular point later on, in the chewing vs. swallowing sub-section. But it's always important for me to remember that excitement is not zero-sum; you can get more excited about more things all the time, as a conscious decision, and it seems to only make things better.

When there are no friends in sight, look for the enemy

This one has also been huge for me; it is about the importance of identifying roadblocks. Whenever there's something I want to do, let's say a theorem I want to prove, there are basically two cases:

  1. I have a pretty clear idea to try.

  2. I have no idea about what to try next.

In case 1, there's not too much to think through: try the idea; if it works, champagne; if it doesn't, repeat until case 2.

So the main issue is what to do in case 2, when there are "no friends in sight" (i.e., clear ideas of what to do next). The perhaps obvious technique is that the next thing to do is to identify what the "enemy" is. If the theorem is non-trivial, then there must be a reason why it is not trivial; some obstacle along the way. If the theorem is of the form "All X's are Y", then look for what the obstacle to Y is. If everything is a Y and there are no obstructions, your theorem is free. So there must be something preventing some things from being Y; what are those things? Can some X fall under that obstacle?

The point here is that at all points there should be a concrete enemy preventing you from accomplishing the task. This doesn't mean that defeating the enemy will be easy, but there should be an enemy. Okay, but once you know the enemy, how do you defeat it? Here's the trick: with the same recursive procedure. Do you have a pretty clear idea of how to beat the enemy? If not, there must be something in the enemy that's obstructing you from it; what is it? Oh, is it the big scary Bazooka they're carrying? Well, you have a pretty clear idea of how to avoid receiving damage from a Bazooka, right? No? Then what's between you and that?! And so on.

Note of course that this is only a methodological procedure with no guarantee of success; if at any point one thinks, not only do I not have any idea of how to avoid receiving damage from a Bazooka, but rather I'm utterly convinced that this is impossible… That's great too! Can't you just prove that it is impossible?! Uhmmm, well not quite, it's hard to prove the Bazooka is going to kill me regardless of what I do; I mean, the enemy could miss the shot, or maybe the Bazooka is not loaded. Very good, is there a strategy that safely allows you to check whether the Bazooka is loaded? Perhaps throwing a decoy instead of yourself?

You see the point: it's hard to get fully and completely stuck working like this, because whatever you're trying is either possible or impossible; so if you take methodologically sound steps, one way or another you should gain something. If there's a path from your current state of knowledge A to your desired state of knowledge B, then that path must go through some intermediate state C that is very close to A, and your mission is to not focus on B and how far it seems, but rather on where C is. A different matter is whether things are achievable in a given timeframe, or whether they're worth attempting at all given their probability of failure. But I think it is crucial to be confident that if you really really want to do something, it's hard to be 100% mentally stuck, as decomposition techniques should get you closer to smaller and smaller tasks, which at some point should be atomically solvable, or atomically impossible. Note as well that being "conceptually stuck" is different from cases like "I can't advance on this paperwork until getting Claire's signature on this other form". Then you can be caught in a deadlock, but at least it is not a conceptual deadlock.

Let's work through an extremely simple example.

Theorem. All trees (i.e., connected acyclic undirected graphs) are bicolorable.

If you can immediately think of a proof, pretend you don't for the sake of the exercise. I'll pretend so.

Okay, so are all graphs bicolorable? If so, then we have it! No? So there are non-bicolorable graphs?! Oh, by trying on paper I got the triangle!

Is the theorem false?! Not really, because the triangle is not a tree, as it has a cycle. But this is good, we found an enemy and his name is the triangle. Is there a way, even though the triangle itself is not a tree, that it still wins as an enemy and prevents our theorem from being true, say because it's a part of a larger tree? Well, not quite either, because a tree couldn't really have a triangle as a part; that would break acyclicity. Okay, the triangle is defeated, so are we done? It seems that things are pointing to "All graphs that don't contain triangles are bicolorable". Is that the same as what we are trying to prove? Well, not quite; a square doesn't contain a triangle and yet it's not a tree.

But the square can be colored with 2 colors! Not really an enemy of bicolorable graphs. Okay, so "All graphs that don't contain triangles are bicolorable" is an appealing idea; it even seems a nice converse to "All graphs that contain triangles are not bicolorable", which we already know by now! Squares are no problemo, so if there's any difference between the truth value of "All graphs that don't contain triangles are bicolorable" and that of our theorem, it must be because of longer cycles, longer than 4 in particular.

Huh, also cycles of length 5 require 3 colors! What about 6? Huh, 2 colors! What about 7? Huh, 3 colors! What about 8? Huh, 2 colors! Okay, the pattern is clear. It seems that cycles of odd length are not bicolorable; good thing trees don't have them! Can we prove that if your graph doesn't have cycles of odd length, then it is bicolorable? What would be an obstacle? A graph that is not bicolorable and yet doesn't have cycles of odd length. Let's look for one. After a bit of pen-and-paper time, one should get frustrated: I don't seem to find any examples. Things start to point toward "bicolorable if, and only if, no odd-length cycles".
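
If the pen-and-paper time runs long, a brute-force sketch (entirely my own, in the spirit of the street-fighting section below) confirms the pattern:

from itertools import product

def bicolorable(edges, n):
    # try every red/blue assignment of the n vertices and check that
    # no edge gets the same color on both endpoints
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product([0, 1], repeat=n))

def cycle(n):
    return [(i, (i + 1) % n) for i in range(n)]

for n in range(3, 9):
    print(n, bicolorable(cycle(n), n))
# 3 False, 4 True, 5 False, 6 True, 7 False, 8 True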

Exercise for the reader: continue this extremely painful proof method until completion, or perhaps with a different (but easy!) problem. I really recommend doing it.

For a longer exposition on this idea, I really encourage the reader to go for Solving Math Problems Terribly. Solving problems terribly is an amazing skill to learn!

Once you've got this idea in mind, the next step is to use it extensively in pedagogy. It's really nice when proofs depict the enemy and show why it can't hurt you, instead of just walking through the grass while leaving the reader wondering why no enemies attack. An example of a proof about trees being bicolorable that I wouldn't like to read is the following:

Define trees inductively as either an empty graph or a vertex (which we call the root) from which an arbitrary collection of trees hang. Now let's prove by induction the stronger statement that all trees can be colored red and blue with the root receiving the color red. Trivial for a single vertex; for the inductive case, color the root blue, and each hanging tree via the inductive hypothesis. If feeling generous to the reader, argue that this is a fine coloring overall, because the only edges are those inside the hanging trees (fine by the inductive hypothesis), and those from the root to each hanging tree's root, which are fine because they are blue-red edges. Finally, invert all colors to preserve the inductive hypothesis.

This is correct, but where is the enemy?! I don't think good proofs have to be extremely explicit in identifying the obstacles, but it sure helps, and if anything, the non-triviality of your theorem is justified precisely by the number of obstacles and the difficulty of overcoming them, so you might as well show them properly.
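
Incidentally, the inductive proof translates almost directly into code; a sketch (representing trees as child-lists is my own choice):

def color_tree(children, root):
    # mirror the induction: color the root red and alternate colors as we
    # descend; the alternation is what "invert all colors" amounts to
    colors = {}
    def visit(v, color):
        colors[v] = color
        for w in children.get(v, []):
            visit(w, "blue" if color == "red" else "red")
    visit(root, "red")
    return colors

print(color_tree({0: [1, 2], 1: [3]}, 0))
# {0: 'red', 1: 'blue', 3: 'red', 2: 'blue'}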

It's very important to find the smallest enemies one can tackle at a time. Usually, when you play a video game, difficulty can be bad in two ways: either the game is too easy, which makes it boring, or it is too hard, which makes it frustrating. I have never heard of a student quitting their Ph.D. because they found it to be boringly easy, so our case is always that of fighting against a game that gets frustratingly hard at times. And how can games be less frustratingly hard and thus more enjoyable? By having a well-calibrated progression of difficulty. This is not trivial; game designers (of both video games and board games) need to spend a lot of time balancing the game so that it's neither too easy nor too hard. When you play a video game, you don't start fighting the boss right away, but rather you warm your way up to it by beating a bunch of weak little monsters. The weak little monsters in mathematics, at least for me, are concrete small examples of what I want to prove. Organizing the quest in a way that has a nice progression of difficulty is really important for me to not get frustrated and quit, and it's not a trivial task; it requires conscious effort.

Street-Fighting Mathematics

This is a fantastic term I learned from Ryan O'Donnell, which pointed me to an older reference, an eponymous book by Sanjoy Mahajan.

So what the heck is street-fighting Mathematics?! First, street-fighting, as opposed to other forms of fighting like Karate or Boxing, refers to a fight without rules, where everything is allowed: hair pulling, punching at the crotch, etc.

The idea of street-fighting mathematics for me is to rebel against a preconceived notion of math as being elegant and correct at all points, justified from the beginning in all of its steps; rule-respecting. There are no rules: whatever you think could help solve the problem you're interested in, you should try. Use a computer, ask your friends, change the theorem statement so now it's easier, assume all graphs will have only 6 vertices, assume π is equal to 3, use all the dirty tricks physicists use, etc. For the love of god, please refrain from mocking physicists for making their life easier by assuming stuff; you ought to do the same first, and then start checking whether you could do the same with one fewer assumption.

I try to street-fight my way around everything, honestly, obeying only the minimum set of rules I actually think I must follow. I've realized some of the best mathematicians I know follow this principle too; either consciously or unconsciously, they take mental shortcuts and try to see if the gaps can be filled later. The justification for the effectiveness of this technique appears to me to be something like this:

Our brain, even when thinking about abstract concepts and formal symbolic manipulation, is driven by intuition, and details come later on, once the intuition has done the initial chopping, like our teeth that mechanically reduce our bites into much smaller pieces that can then be swallowed and absorbed. This is similar to the following: if you try to memorize the sentence "The quick brown fox jumps over the lazy dog" and then say it out loud without looking, this is pretty easy, but now that you have memorized it, spell it out loud. This is harder, and usually our strategy for doing it is going back and forth from "spelling" mode to "remembering the next word" mode. Things can be substantially easier by not doing them right first, by going for the big picture first, and by filling in the gaps later. In other words, when you have to fill a box at a Chinese buffet, put in the spring rolls first, and the rice later to fill in the empty spaces.

Let me give a concrete example of how this technique is used.

Problem. Let us say a number whose digits in base 10 are only zeros and ones is a "binary impostor". So 10001 is a binary impostor and so is 1110, but not 1030. Now prove that there are infinitely many binary impostors divisible by the current year.

If you read this in 2022, the problem is not too hard. 2022*5 = 10110, which is a binary impostor, and we can always add more 0s, making for infinitely many binary impostors. This easy case gave us the idea of finding a single multiple of the year that is a binary impostor, and then padding it with 0s. But what if you're reading this in 2023?! The first thing to do, unless you immediately see the solution, is to go to Python and print the first 100 multiples of 2023. Huh, unfortunately, none of them is a binary impostor. What about 2024? Huh, neither…
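
Concretely, here's the kind of throwaway check I mean; a minimal sketch (is_binary_impostor is just my name for the helper):

def is_binary_impostor(n):
    # digits in base 10 are only 0s and 1s
    return set(str(n)) <= {"0", "1"}

for year in [2023, 2024]:
    hits = [year * k for k in range(1, 101) if is_binary_impostor(year * k)]
    print(year, hits)  # both lists come out empty: no luck in the first 100 multiples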

Let's go the opposite way around: print the remainder mod 2023 of randomly generated binary impostors.

import random

def to_bin(n):
    # decimal number whose digits are the binary representation of n;
    # for n >= 1 this is always a binary impostor
    if n <= 1: return n
    return to_bin(n//2)*10 + n%2

# remainders mod 2023 of 50 random binary impostors
rems = []
for _ in range(50):
    bi = to_bin(random.randint(100, 5000))
    rems.append(bi % 2023)

print(sorted(rems))

Here's what I got.

Notice something? 230 appears twice! Now I'm curious if the numbers that gave rise to these 230s look weird.

Now (on this new random run) 230 doesn't show up, but I see 3 different random binary impostors that are 56 mod 2023. This is perhaps useful, but I don't see a pattern right away.

Binary impostors are constructed (see the Python code) by taking one and adding a 1 or 0 at the end. What does that do mod 2023? I don't even think now, just go to Python; street-fighting. I get some output, but still don't see any patterns. What to do?! I'm going to cheat now and pretend the question was about a number smaller than 2023, because spotting patterns in these decently large numbers is not obvious.
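
For concreteness, the "just go to Python" step here is tiny; a sketch of the digit-append effect on the remainder:

# appending a digit d to a number x gives 10*x + d, so mod 2023
# the remainder evolves as r -> (10*r + d) % 2023
r = 1                    # the impostor "1"
for d in [0, 1, 1, 0]:   # grow it into 10110, digit by digit
    r = (10 * r + d) % 2023
print(r)  # 2018, i.e., 10110 mod 2023 (10110 is a multiple of 2022, not 2023)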

What about 2? Mmh, but I immediately see 10 as a multiple. What about 3? The first binary impostor divisible by 3 seems to be 111. I could have thought this, but I shamelessly just coded it. Huh, okay. What about 4? Then it's 100. What about 5? Then it's 10 again. What about 6? It's 1110. Okay, too many similar questions; I'll just write the little piece of code that prints, for every small value of N, the smallest binary impostor divisible by N.

def to_bin(n):
    # decimal number whose digits are the binary representation of n
    if n <= 1: return n
    return to_bin(n//2)*10 + n%2

def first_bi_div(n):
    # smallest binary impostor divisible by n
    # (n*n*n is an arbitrary street-fighting cutoff for the search)
    for i in range(1, n*n*n):
        if to_bin(i) % n == 0:
            return to_bin(i)

for i in range(2, 60):
    print(first_bi_div(i))

You might ask now, why the n*n*n upper bound? I don't know, I just made it up; it seemed big enough and tractable; street-fighting.

I look at the output and now I notice something: these don't look like the random ones I was generating earlier; they have lots of 1s in a row or lots of 0s in a row.

Question: if I consider numbers of the form 111…000, what are they divisible by?

Okay, obvious observation: the 0s at the end will make for 2s and 5s… But 2023 is not divisible by 2 or 5, so not super helpful. At least now I know that I should look at impostors ending in 1.

Except for the first one, they all have prime divisors, and 3 pops up (2 and 5 are discarded by the previous idea), 7 appears, 11 appears, and 13 appears, but I don't see 17. I look at my previous code and it tells me the first binary impostor divisible by 17 is 11101. I now wonder if at some point 1111…111 will be divisible by 17. Let Python figure it out. I write the first hacky code I come up with. The answer is 1111111111111111; incidentally only a single 1 more than what I tried with the factor Unix tool. Reminds me of this meme.
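
The hacky code can track only the remainder, so the repunit itself never needs to be stored; my version of it:

# smallest k such that the repunit 11...1 (k ones) is divisible by 17
r, k = 1, 1
while r != 0:
    r = (10 * r + 1) % 17
    k += 1
print(k)  # 16, i.e., 1111111111111111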

Okay, so I'll keep going and check whether the numbers of the form 111…111 are eventually divisible by anything (that is not a multiple of 2 or 5).

Huh, it's true up to 100 at least. Promising stuff!
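
A sketch of that check, reusing the remainder trick (ones_needed is my naming):

from math import gcd

def ones_needed(n):
    # smallest number of 1s such that the repunit 11...1 is divisible by n
    r, k = 1 % n, 1
    while r != 0:
        r = (10 * r + 1) % n
        k += 1
    return k

for n in range(2, 101):
    if gcd(n, 10) == 1:
        print(n, ones_needed(n))  # terminates for every such n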

It quickly found one divisible by 2023! Our original problem is done.

But now I want more: it seems to be true for any such number!

This is a fairly large number of 1s though; I don't know how much further I can push the computer… I guess I don't really need to keep all these 1s on the computer; I'll just see how adding an extra 1 affects the result mod 2023. I do this but don't see an obvious pattern… At some point, the sequence of mods repeats itself naturally, so I'll check what the period is.
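
Same remainder trick once more; a sketch that records where the zeros land:

# remainders of 1, 11, 111, ... mod 2023, recording where they vanish
r, k, hits = 1, 1, []
while len(hits) < 2:
    if r == 0:
        hits.append(k)
    r = (10 * r + 1) % 2023
    k += 1
print(hits)  # [816, 1632]: the zeros repeat with period 816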

The first binary impostor of the form 111…111 divisible by 2023 has 816 ones, the next has 1632, and so on. 816 is the period. Quite cool actually; so I don't even need the 0-padding, I'm actually showing that there are infinitely many unary impostors divisible by 2023!

What if it's not 2023 though? Well, I realize that the sequence mod N must be periodic as well, with a period of at most N. The problem is that perhaps in all that period it never gives me a 0 mod N. If this never happens, then all numbers have infinitely many unary impostors divisible by them. If it does happen for some N, then there are no unary impostors divisible by N, but there are still pairs of unary impostors that have the same remainder mod N. Now I see it: if I subtract the smaller unary impostor from the larger one, I get a binary impostor, and it has to be 0 mod N, because it's the subtraction of two numbers that are equal mod N :)

Nice! I finally have infinitely many binary impostors divisible by any given year :)

Exercise for the reader. Play around and see whether it's true that all numbers not divisible by 2 or 5 have infinitely many unary impostors divisible by them.

Now here is how I'd probably write the proof of infinitely many binary impostors divisible by any natural N in a paper:
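
Theorem. For every positive integer N, there are infinitely many binary impostors divisible by N.

Proof (writing up the pigeonhole idea from above). Let R_k denote the number formed by k ones. Among R_1, …, R_{N+1}, two must leave the same remainder modulo N, say R_i and R_j with i > j. Then N divides R_i - R_j, which is the number formed by i - j ones followed by j zeros, hence a binary impostor. Multiplying by powers of 10 appends further zeros, preserving both divisibility by N and impostor-ness, which yields infinitely many. ∎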

This sort of writing hides the attempts and failures of the mathematician behind it, and even though concision is a very positive quality, it's worth thinking about this trade-off, and about how there's so much more behind the scenes of a proof than a first glance shows. It's very common to hear amongst undergrads in Math classes that "there's no way in hell I could've come up with that proof". They might be right some of the time, but other times they just ignore the highly non-linear path the authors might have taken to get where they got. It's very dangerous for one's education to think that the reason you couldn't come up with that proof by yourself is some fundamental difference between you and the authors, e.g., they're simply smarter; they might be smarter, who knows, but that doesn't have much to do with the phenomenon at hand.

Of course, this methodology is harder to apply to more abstract problems, and it's not a recipe for problem-solving, which remains wide open. The idea for me is to avoid "mathematician's block"; similar to how writers get blocked in front of the white piece of paper, the mathematician gets blocked in front of the whiteboard, the computer, or the white piece of paper. The standard advice to get past the well-documented writer's block is: just sit down and write something, anything; make it as bad as you need to be able to write something, but write! Street-fighting and looking for obstacles are key aspects of my methodology to avoid the mathematician's block.

Chewing vs. Swallowing

Something I struggled with for a long time, and am just starting to be a tiny bit better about, is that sometimes one falls into the trap of trying to understand every single thing that is required to do something: prove a theorem, understand a paper, fill out a tax form, etc. Most of the time, this is simply a procrastination strategy to avoid doing whatever we are dreading.

One could think that the best thinkers really process all the details, and understand every single thing that they're reading, and every single previous result required to understand what they're reading, and so on. My experience with the best philosophers and mathematicians I've met is quite the opposite; they swallow a lot of the stuff. Rather than with careful, timid, wise owls, I associate them more with fast-moving, astute foxes; they try things rapidly, look up stuff, constantly assume things, swallow other people's results without chewing them, and recognize when what they want to do requires going back to a previous result and chewing it out to obtain the missing bits.

This point is very tightly related to having clear goals; once goals are clear, this also sheds light on which parts of the process one should chew and really try to understand, and which parts of the process one can just swallow.

Do what you want, or even better, don't do what you dread

A key component of my mathematical growth is to try as hard as I can to work on problems and approaches I really want to work on, regardless of whether other people find them silly or useless. Cultivating my own excitement for mathematics is globally more important for my career than an impactful paper, so it's not worth losing much global excitement to win some local recognition points.

For example, I have published 2 papers at FUN with Algorithms, a conference about CS applied to fun contexts. One about the game of Hangman, with Jérémy Barbay, and one about Wordle with Daniel Lokshtanov. It's very likely that they won't serve my career much, but I enjoyed working on them, and it cultivated my love for Maths. In hindsight, I'm pretty happy I did it; not only did I have fun, but they made me a better mathematician.

The website for FUN with algorithms had at some point the following quote by the great Donald Knuth:

"…pleasure has probably been the main goal all along. But I hesitate to admit it, because computer scientists want to maintain their image as hard-working individuals who deserve high salaries. Sooner or later society will realise that certain kinds of hard work are in fact admirable even though they are more fun than just about anything else."

A practical application of this idea for me is in the negative case; when there's something I don't want to do, I try really hard to not do it. I mean that I try hard to explore whether I actually really really really have to do it, "and there's not another way?", and see where that leads. A good fraction of the time I figure out a workaround that allows me to not do the thing I don't want to do. The other fraction of the time, I'm confident that the dreadful thing really really has to be done, and I know of a good reason for that, which helps me do it. So if one's to be doing something one doesn't enjoy, at least it needs to be something one really must do.

An example I read recently was about someone on Twitter who really hated doing dishes, and so they just started buying plastic disposable silverware and plates. Maybe this is not great for environmental reasons, but the mindset is great; if I dread this thing so much, how can I not have to do it anymore?

A step beyond this: I try to collaborate as much as I can with people who share this value, as I feel more comfortable discussing research aesthetics with them. I believe research aesthetics and compatibility are real things that shouldn't be discounted when building effective collaborations. I'm happy, for example, about my advisor Marijn Heule being explicit about working on the problems we really want to solve and see solved.

A lot of the magic happens in post-production

This part can be summarized as "don't finish too quickly". I learned this very recently in life, and it improved my mathematical, philosophical, and literary skills immediately; very few pieces of learning have done that for me.

The main idea is that, perhaps because of some internal mild form of anxiety, one really wants to finish the proof, the paragraph, the argument, and be done. You wrote the last sentence or said the last phrase, and you're done. Nice.

But the thing is, a lot of the magic happens in post-production, once you sit down calmly with the material you have created and squeeze the lemon until the last drop. When you see a wonderful video on YouTube, it didn't look like that the whole time; it probably looked like a disaster at the intermediate stages, had you been able to see them. The last 20% of the effort, at post-production, can really be 80% of the "wow, this is so high-quality" effect.

So my intention here is to never be quick in brushing off a proof once I think I have it. A lot of the time for me, the real gain in understanding comes after having come up with a proof, when I'm looking back at this disastrous creation and cleaning it up. I try to ask myself "why is it that I was able to succeed with this method here?"; I had an idea A, and it was not obvious to me beforehand that A was going to work. Then I went ahead and tried it, and it turned out to work, but notice that if I stop here, because my proof is ready, I still don't understand super well why it worked; it's still not absorbed as part of my new intuition. So I try to investigate what part of the idea I was doubtful about, what part I was not confident in, and whether my doubts were justified; what was it that blocked the obstacle my initial doubt was worried about.

Let's see a philosophical example. Anselm's ontological argument for the existence of god goes as follows.

"[Even a] fool, when he hears of … a being than which nothing greater can be conceived … understands what he hears, and what he understands is in his understanding.… And assuredly that, than which nothing greater can be conceived, cannot exist in the understanding alone. For suppose it exists in the understanding alone: then it can be conceived to exist in reality; which is greater.… Therefore, if that, than which nothing greater can be conceived, exists in the understanding alone, the very being, than which nothing greater can be conceived, is one, than which a greater can be conceived. But obviously this is impossible. Hence, there is no doubt that there exists a being, than which nothing greater can be conceived, and it exists both in the understanding and in reality." St. Anselm, Archbishop of Canterbury (1033-1099).

A bullet-point version of the argument in modern English (that I am taking literally from the Internet Encyclopedia of Philosophy) is:

  1. It is a conceptual truth (or, so to speak, true by definition) that God is a being than which none greater can be imagined (that is, the greatest possible being that can be imagined).
  2. God exists as an idea in the mind.
  3. A being that exists as an idea in the mind and in reality is, other things being equal, greater than a being that exists only as an idea in the mind.
  4. Thus, if God exists only as an idea in the mind, then we can imagine something that is greater than God (that is, a greatest possible being that does exist).
  5. But we cannot imagine something that is greater than God (for it is a contradiction to suppose that we can imagine a being greater than the greatest possible being that can be imagined.)
  6. Therefore, God exists.

The first criticism I read against this argument came from a monk contemporary to Anselm: Gaunilo of Marmoutier. He basically posited that the same argument could be used to prove the existence of a perfect Island, as a perfect Island that exists in the real world is more perfect than one that exists only in the mind, and therefore the perfect Island must also exist in the real world.

I made a 3-year-long intellectual mistake by thinking that Anselm's idea was destroyed by this counter. But the thing is, the "pisland" (short for perfect island, one of my favorite pieces of philosophy lingo) argument is more like a gotcha, an issue that has been raised, but it doesn't give you good intuition on why Anselm's argument is wrong, just a way of exposing a potential failure mode in the argument.

The issue is actually much more complicated than this, and one really needs to stay after the credits to see it. I don't think the failure in the argument was understood until Kant introduced the Copernican Revolution of philosophy, 700 years later.

In a nutshell, Kant tackles the argument by going against point 3, his idea being that thinking of existence as a property is a category mistake. In other words, the universe is not made out of objects that have a self.exists = True or self.exists = False property; existence is not a property, but a precondition for the instantiation of properties into a particular object; and properties themselves are representational concepts rather than real things. There are real things, exactly as they are, and then their properties are in our mental representations of them. Redness, as a property, is only a conceptual representation; (over-simplifying) there are red things, which are things whose reality makes the "redness" mental representation toggle on in our brains. For a statistical example, even if your data looks like the following image:

clusters do not exist in the data, only in our conceptual representation of the data. Existence, however, is not even a candidate for a conceptual representation that real objects trigger, but rather a precondition for any conceptual representation. Talking about existence as a property is a type error, so to say.

However, even this is not enough! Because Anselm actually had a second formulation of the ontological argument, which uses the property of necessary existence rather than existence, and that one can be formalized as an actual property. Using necessary existence, Kurt Gödel, the greatest logician, came up with a formal proof in modal logic of god's existence based on a refined version of Anselm's second ontological argument. Funnily enough, Gödel's argument can be verified by a computer given its formal nature, and it turns out to be consistent! (That is, if one accepts its axioms, the conclusion follows!) To the best of my knowledge, the implication of modal collapse, and the axioms themselves, are still a matter of research.

By finishing too quickly with the pisland argument, I missed out on a lot more about the subject.

Stealing as much as possible

The following is a picture of the magnificent Las Meninas, by Diego Velázquez. It's one of the most studied paintings of all time, a true masterpiece. (There's a lot to say about this painting, but I won't go into it. Luckily for you, people with infinitely more knowledge about art and art history have extensively written about it. If you're into Foucault, he wrote some very… Foucaultian things about it; it's quite interesting.)

But now take a look at this.

This is Picasso's 1957 version of Las Meninas. Truly fantastic stuff. So the thing is, Picasso once said jokingly: Good artists copy, great artists steal. This is a phrase that needs to be taken very carefully to make sense of it. It's not about defending actual intellectual property theft (please don't do that, and double please don't do that saying I told you to do it), but rather about the way great artists relate to their inspirations. Stealing is a way of saying appropriating fully; it is a way of saying that you don't owe any faithfulness to your inspirations, only credit. They did it first, and that needs to be acknowledged, but you can do with it whatever you want, transform it at your wish, use it for all kinds of silly stuff. Copying has a connotation of sameness that does not allow for repurposing; copying means using the same solution for the same problem, but it is when you steal someone's toolkit that you can use it on all kinds of new problems.

A concept I really like about good science is that it should not only contain results, but also "reusable brain stuff" (RBS); RBS is what you can steal from the paper and use, probably with tweaks, to solve other new problems. Being mindful of the reusable brain stuff in other people's work has really changed my relation to the material I read; now I'm constantly on the lookout for what I can steal. Richard Feynman famously said that he kept a dozen math or physics problems that he wanted to solve memorized in his head, and whenever he went to a talk that showed a new math trick, or read a new paper, he would iterate over the dozen problems in his head, asking the question "can this trick I'm learning now help me solve this problem?".

Paraphrasing Pacheco, the Mexican writer: coming up with the equation is only half of the way, and the other half is done by the reader when they use it. By using them, you become in some sense their author, who invents them again in the use.


Thanks for staying here.

P.S. The excuse for this post can be said to be the following tweet, by Jérémy Barbay, in response to my public declaration of a renewed love for mathematics.

So thanks JƩrƩmy for the encouragement! and I hope this reflection can be helpful, or interesting, to someone out there. If it is interesting to you, reader, please let me know what you think!

Or at least within the first page of a Google search. I'd be happy to find other translations if anyone points them out to me.

There's at least one reasonable way I can think of for my argument to be wrong: imagine a world where understanding is fundamentally obtained through quantum leaps that cannot be broken down into intermediate steps. In this world I'm wrong, and I don't see a simple way to prove (nor disprove, for that matter) that our world is not that world, but I'm pretty confident in this argument being reasonably inferred by Bayesian updating over my inner mental experience. I'd be more than happy to update these beliefs, or simply discuss them further, with the interested reader. It is also worth clarifying that this world model does not mean that our brain works in some sort of continuous fashion over a manifold of possible mental states; far from it. I mean that the resolution of our possible mental states seems good enough to abstractly model reasoning as a continuous phenomenon, similarly to how, at human-scale physics, it's reasonable to model position or speed as continuous variables, regardless of whether they actually only admit a discrete number of quantum states they can collapse into, simply because that is happening way beyond our resolution of interest. More generally, what can continuity possibly be if not the limiting idea (i.e., a purely mental construction) of discreteness being small enough to be abstracted away?