Alright, welcome to the category theory lectures. So, are we all programmers here? Is everyone a programmer, or are there people who are not programmers? If you're not a programmer, please raise your hand. Okay, that means you don't write any programs?

> I kind of learn it as a hobby.

Okay, well, that's good enough. And how many people here have a little bit of knowledge of Haskell? Oh, lots, that's excellent, because I might be giving some examples, mostly just declarations, functions, things like that. I'll explain everything, but it's good to have a little bit of understanding. So I'm not really teaching a programming language; this will be about category theory, and category theory is maybe the most abstract part of mathematics. So the question is: why are we here? What does it have to do with programming? A lot of programmers hate math. They finished learning math at college and now they're really happy: "for the rest of my life I will not have to touch it," "I hate math," and so on. And you guys came here for, I dunno, punishment? Why do you think category theory might be important? Is category theory only relevant to functional programming, or would other branches of programming maybe profit from it too? Is there something more to offer? Who's heard of functors? And who knows what functors are? Uh huh, a few.
That's good; that means I can actually explain stuff and I won't be totally repeating what you already know. I came to category theory through a long, twisted road. When I started programming, I started in assembly language, the lowest possible level, where you tell the computer exactly what to do: "grab this thing from memory, put it in a register, use it as an address, and then jump," and so on. This is a very imperative way of programming. We start with this most imperative approach and then we drag it with us throughout our lives, and at some point we have to unlearn it. In computer science this approach is related to Turing machines. A Turing machine is a very primitive machine that just stamps symbols on a paper tape; there is no high-level programming there, it's essentially assembly language: "read this number, put it back on the tape, erase something from the tape," and so on. That's one approach to programming, and by the way, all these approaches were invented before there were even computers. Then there's the other approach, which came from mathematics: the lambda calculus, Alonzo Church and those people, asking "what is it possible to compute?", thinking about mathematics in terms of how things can actually be executed, how things are transformed. Neither approach is very practical; people do write programs in assembly language, and it's possible, but it doesn't really scale. So this is why we came up with languages that offer higher levels of abstraction. The next level of abstraction was procedural programming. What's characteristic of procedural programming is that you divide a big problem into procedures, and each procedure has its
name, has a certain number of arguments, and maybe returns a value; not necessarily, maybe it's just for side effects. Because you chop your work into smaller pieces, you can deal with bigger problems, so this kind of abstraction really helped in programming. Then people came up with the idea of object-oriented programming, which is even more abstract: now you hide state inside objects, and then you can compose these objects. Once you've programmed an object you don't have to look inside it; you can forget about the implementation and just look at the surface of the object, which is the interface. Then you can combine these objects without looking inside, and combine them into bigger objects, and so on. So you can see there's a certain idea here, and it's a very important idea: if you want to deal with more complex problems, you have to be able to chop the big problem into smaller problems, solve them separately, and then combine the solutions together. There is a name for this: composability. Composability is something that really helps us in programming. What else helps us in programming? Abstraction. "Abstraction" comes from a Latin word that means more or less the same as "subtraction": getting rid of details. You want to hide details, or you want to say, "these things differ in some small details, but for me they are the same; I don't care about the details." So an object in object-oriented programming is something that hides details, abstracts over them, and there are even abstract data types that just expose an interface, and you're not supposed to know how they are implemented.
So when I first learned object-oriented programming, I thought, "this is the best thing since sliced bread," and I was a big proponent of it. Together with this idea of abstracting and composing things comes the idea of reusability: if I have these blocks that I've chopped up and implemented, maybe I can rearrange them in different ways; once I've implemented something, maybe I can use it to solve another problem. I'll have lots of building blocks that hide their complexity, and I'll just juggle them and put them into new constellations. So it seemed to me that this was really the promise of object-oriented programming, and I bought it. I started programming in an object-oriented way using C++, and I became pretty good at C++, I think; I wrote a lot of C++ code. Well, it turns out there is something wrong with this object-oriented approach, and it became more and more painfully obvious when people started writing concurrent and parallel code. Concurrency does not mix well with object-oriented programming. Why not? Because objects hide implementation, and they hide exactly the wrong things, which makes them non-composable. They hide two things that are very important. First, they hide mutation: they mutate some state inside, and we don't know about it; they don't say "I'm mutating something." Second, they share data: they pass pointers around and often share data between themselves. And mixing sharing and mutation has a name: it's called a data race. So what objects in object-oriented programming abstract over is data races, and you are not supposed to abstract over data races, because when you start combining these objects, you get data races for free.
And it turns out that, for some reason, we don't like data races. So once you realize that, you think, "okay, I know how to avoid data races: I'm going to use locks, and I'm going to hide the locks too, because I want to abstract over them," like in Java, where every object has its own lock. Unfortunately, locks don't compose either; that's the problem with locks. I'm not going to talk about this too much, because that belongs to a different course, about concurrent programming, but I'm mentioning it because, as you raise the level of abstraction, you have to be careful about what you're abstracting over: what are the things you are subtracting, throwing away, not exposing? So the next level of abstraction that came after that (well, actually, it came before, but people realized, "hey, maybe we have to dig it out") is functional programming, where you abstract things into functions. In a functional language, and especially in Haskell, you don't have mutation, so you don't have this problem of hiding data races; and you also have ways of composing data structures into bigger data structures, and because everything is immutable, you can safely compose and decompose things. Now, every time I learned a new language, I wanted to find its boundaries: what are the hardest things to do in this language? For instance, in C++, what is the highest level of abstraction you can get? Template metaprogramming. So I was really fascinated by template metaprogramming, and I started reading books about it, thinking, "wow, I would never have come up with these ideas, they are so complex."
It turns out these are very simple ideas; it's just that the language is so awkward at expressing them. I learned a little bit of Haskell and I said, "okay, this huge template that was so complicated, that's one line of code in Haskell." So there are languages in which there was a jump in the level of abstraction, which made it much easier to program at a higher level. And in every language there is this group of people writing libraries, and they always stretch the language; they always go to the highest possible abstraction level, and they hack at it as much as possible. So I thought, "okay, I don't like hacking; I just want to use a language that allows me to express myself at a high level of abstraction, and that lets me express new ideas that are much more interesting." With template metaprogramming, for instance, you express the idea that you might have lots of data structures that differ only in the type they hide. You can have a vector of integers and a vector of booleans, and there's so much code to share; so if you abstract over the data type you are storing, if you forget about it, hide it, you can write abstract code, and in C++ you do this with templates. And you get something new: you program at a higher abstraction level, because you disregard some of the details. So that was great. Now, it turns out that once I learned Haskell (I'm still learning Haskell, to some extent), I found there are things in Haskell that sit at these boundaries of abstraction; there are people working on this frontier of Haskell. There are certain very abstract things that are unfortunately a little bit awkward to express in Haskell, so there is a barrier to abstraction even in Haskell.
I mean, if you've seen some of the libraries written by Edward Kmett, you realize they are extremely hard to understand; what was the thought process? And the secret is very simple: category theory. Edward Kmett knows category theory; he takes this stuff from category theory, reads the mathematical papers, and translates them into Haskell. And when you translate stuff from category theory to Haskell, you lose a lot of abstraction; you make it more concrete. Haskell has one category built in, and you are restricting yourself to that particular category. You can create other categories in Haskell and model them, but that becomes awkward, sort of like template metaprogramming within Haskell; not exactly the same mechanism, but the level of difficulty in expressing these ideas in Haskell is as great as that of template metaprogramming in C++. So category theory is a higher-level language above Haskell, above functional programming, above ML, Scala, and so on, above C++ and assembly. It's not a practical language that we can write code in, but it is a language. Just as the people who write these horrible template metaprogramming libraries in C++ can only do it because they learned a little bit of Haskell (they know what certain constructs in Haskell mean, how to implement algorithms on immutable data structures, and they translate that into this horrible template programming language, and it works, and people use it), in the same way, if you're a functional programmer, you can take these very, very abstract ideas from category theory and translate them into this somewhat awkward language called Haskell.
Now, from the categorical perspective, Haskell becomes this ugly language, just as from the Haskell perspective C++ becomes this ugly language. But at least it's an executable language, a language in which we can write programs. And of course, when these ideas percolate from category theory down to Haskell, they can then also percolate down to C++, and even to assembly, PHP, or JavaScript if you want, because these are very general ideas. So we want to get to the highest possible level of abstraction, to help us express ideas that can later be translated into programs. That, for me, is the main practical motivation. But then I also started thinking about the theoretical, or more philosophical, motivation, because once you start learning category theory, you say, "okay, category theory unifies lots of things." If you abstract enough, if you chop away all the unnecessary stuff, you're on top of Mount Everest looking down, and suddenly all these things start looking similar: from that height all these little programming languages look like little ants, and they behave the same way; "okay, they're all the same." At that level of abstraction, all this programming stuff looks the same. But that's not all: suddenly, at this high level, other things look the same too, and mathematicians discovered this, because they had been developing separate areas of mathematics. First they developed geometry, algebra, number theory, topology, set theory, what have you; these are all completely separate theories that look nothing like each other, and category theory found the similarities between all of them. It turns out that at a certain level of abstraction, the structure of all these theories is the same.
So you tweak the structure of a category and you suddenly get topology; you tweak it another way and you suddenly get set theory; you tweak something else and you get algebra. So there is this grand unification of essentially all areas of mathematics in category theory. But then there is also programming: programming that deals with computers, with memory and processors and registers. And this stuff can also be described mathematically, yes, with the lambda calculus. All these languages that essentially have roots in the lambda calculus try to get away from bits and bytes, from gotos and jumps and loops, and to abstract this stuff into something more like the lambda calculus. And there are data structures; we used to look at them like "here's a bunch of bytes, here's another bunch of bytes, here's a pointer from this bunch of bytes to that bunch of bytes," and so on, at a very, very low level, like plumbers working with tubes. And then computer scientists said, "well, actually, these things form types." So there's type theory; there's type theory that can describe all these kinds of data structures, but there's also type theory in mathematics; people invented types in math not to solve problems that