This essay argues that inheritance is harmful and that, if possible, you should “ban inheritance completely”. You see these arguments a lot, along with advice like “prefer composition to inheritance”. Most of them point out that inheritance causes problems in practice, but they don’t preclude inheritance working in another context, maybe with a better language syntax. Nor do they explain why inheritance became so popular in the first place. I want to explore what’s fundamentally challenging about inheritance and why we all use it anyway.
🙄 people who blindly say “inheritance is bad” or “composition over inheritance” and mean “no inheritance” are just like followers of religion. Always repeating the same stuff without thinking and being completely convinced they are right. Nigh on everything has a place and a valid use case, and just because you don’t see it doesn’t mean nobody else does.
Edit: Also “sum types” and “algebraic data types” are horrible names. Pretty much the equivalent of “imaginary numbers”. What the fuck is a “sum type”? How do you “add” types together? Adding numbers makes sense, it has a real world equivalent. Two balls in a cup, add one ball and you have three balls in a cup. Add color to water and you have colored water. Simple. But types? The fuck?
`str | int` is a sum type → does that mean sum types are adding two spaces of possibilities together, aka a union of two sets? The Wikipedia article is so friggin bad at explaining it to people who aren’t in the know.

@onlinepersona @armchair_progamer
A type has a number of ‘inhabitants’. ‘Sum’ indeed corresponds to adding the possible inhabitants together.
A Boolean has two inhabitants - true and false. A byte has 256 inhabitants. A BoolOrByte type has 258 inhabitants.
If you have a BoolByte pair, that’s a product type - 512 possible inhabitants.
It may make no fucking sense depending on your exposure to Java, where Void (literally ‘empty’) has an inhabitant, and Boolean has 5.
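As a rough Rust sketch of that counting (the type and field names here are invented for illustration):

```rust
// Sum type: 2 + 256 = 258 possible values of BoolOrByte.
enum BoolOrByte {
    Bool(bool), // 2 inhabitants
    Byte(u8),   // 256 inhabitants
}

// Product type: 2 * 256 = 512 possible values of BoolByte.
struct BoolByte {
    flag: bool, // 2 inhabitants
    byte: u8,   // 256 inhabitants
}
```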
OK, that explains `bool | byte`, but how is an enum a sum type? And what’s a product type? A product of sets is a Cartesian product. How does that work with types?

An enum is a sum type because the number of inhabitants of the enum is the sum of the inhabitants of its parts.
A product type’s number of inhabitants is the product of its parts’ inhabitants. So a struct would fit that definition, or a pair, or a tuple.
Looking at the pic on your Cartesian product link: if A is an enum {x,y,z} and B is an enum {1,2,3}, then a struct AxB has 9 possible inhabitants.
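A quick Rust sketch of that 3 × 3 counting (type and variant names invented):

```rust
enum A { X, Y, Z }          // 3 inhabitants
enum B { One, Two, Three }  // 3 inhabitants

struct AxB { a: A, b: B }   // product type: 3 * 3 = 9 possible inhabitants
```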
OK, I think I’m getting it.

A product type is a set that is the result of an ordered Cartesian product.
```rust
struct Car {
    make: String,
    model: String,
    seats: u8,
}
```

`Car = String x String x u8`.
An enum is a series of "or"s.
```rust
enum Animal {
    Dog,
    Cat,
    Giraffe,
    Chimpanzee,
}
```
can also be thought of as `Animal = Dog | Cat | Giraffe | Chimpanzee`, where `Dog` is a type that only has a single value in its set, aka `Animal = {1} | {2} | {3} | {4}`, but it could also be strings, or other objects. Rust however allows more complex objects:

```rust
enum ComplexEnum {
    Nothing,
    Something(u32),
    LotsOfThings {
        usual_struct_stuff: bool,
        blah: String,
    },
}
```
In this case, is `Something(u32)` the equivalent of any “tagged” `u32`, meaning in memory it’s something like a `Tag + 32 bits`, where `Tag` is a constant string of bits, maybe itself a `u32`? Wouldn’t that make it a product type?
But then `LotsOfThings` is itself a product type, `LotsOfThings = bool x String`.

So to put it all together, `ComplexEnum = Nothing | TaggedU32 | (bool x String)`? Is that correct?

So to put it all together, `ComplexEnum = Nothing | TaggedU32 | (bool x String)`? Is that correct?

Pretty much, yeah. But just be aware the tags are effectively unique constants, so each has only one value. For consistency I would write it as:
ComplexEnum = Nothing | Something(u32) | LotsOfThings(bool x String)
In this notation, `Something(u32)` could also be written as `1 x u32`, because tags are constants.
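To make the tag idea concrete, here is a small Rust sketch (the `main`, the assertion, and the printout are just illustration, not from the thread; exact sizes depend on platform and layout):

```rust
use std::mem::{discriminant, size_of};

enum ComplexEnum {
    Nothing,
    Something(u32),
    LotsOfThings { usual_struct_stuff: bool, blah: String },
}

fn main() {
    // Each variant carries a distinct tag (discriminant) plus its payload.
    let a = ComplexEnum::Something(7);
    let b = ComplexEnum::LotsOfThings { usual_struct_stuff: true, blah: "hi".into() };
    assert_ne!(discriminant(&a), discriminant(&b));

    // The whole enum is one tag plus enough space for the largest payload;
    // the exact number of bytes depends on the platform and layout optimisations.
    println!("{} bytes", size_of::<ComplexEnum>());
}
```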
OK, so finally I get it. It’s a pity none of the blogs I’ve read or Wikipedia articles in existence spell it out this way. Instead it’s a bunch of math mumbo jumbo.

Thanks for helping me reach understanding 🙏 And thanks to @Kacarott@feddit.de too.
just like followers of religion
We say that shit, because we’ve touched code that’s deeply inherited, and it was a god-damn pain to work with, because changing a single line can mean you’ll need to update a fuckton more, which breaks tests all god-damn over, which means you may have to refactor 50% of the application in one go.
Anyway, everything has its uses (even goto). It’s just there are typically better alternatives.
“sum types” and “algebraic data types” are horrible names.
Agreed, but they exist due to historic reasons, and now we’re stuck with them. Not much we can do there ¯\_(ツ)_/¯
Pretty much the equivalent of “imaginary numbers”.
Terrible name that just means “vertical number line” (with an added operation where you rotate the vector, instead of add or scale), or “y-axis for the number line”. It’s funny because “Real” numbers are about as real as “Imaginary” numbers. Both are virtual (not physically existing).
`str | int` is a sum type

It just means that the variable can either be a `str` or an `int`. You’ve seen `|` used as “bitwise or”, right? Think in that direction.

PS: Stay away from Monads - they’ll give you an aneurysm. 😂
In some langs like Python, `|` is also the “union” operator, to join sets and such, which I think is more directly related to types, since types are sets of possible values.
The sum and product types follow pretty much the same algebraic laws as natural numbers if you take isomorphism as equality.
Also class inheritance allows adding behaviour to existing classes, so it’s essentially a solution to the expression problem.
Yes, I know some of those words. Could you repeat that for those that aren’t mathematicians or in the know?
As you already figured out the types are sets with a certain number of elements.
Two types are isomorphic if you can write a function that converts all elements of the first one into the elements of the second one and a function which does the reverse. You can then use this as the equality.
Types with the same number of elements are isomorphic, e.g. True | False = Left | Right: you can write a function that converts True to Left and False to Right, and a function that does the reverse.
Therefore you essentially only need types 0, 1, 2, 3, …, where type 0 has 0 elements, type 1 has 1 element, etc. and all others are isomorphic to one of these.
Let’s use (*) for the product and (+) for the sum, and letters for generic types. Then you can essentially manipulate types as natural numbers (the same laws hold, associativity, commutativity, identity elements, distributivity).
For example:
2 = 1 + 1 can be interpreted as Bool = True | False
2 * 1 = 2 can be interpreted as (Bool, Unit) = Bool
2 * x = x + x can be interpreted as (Bool, x) = This of x | That of x
o(x) = x + 1 can be interpreted as Option x = Some of x | None
l(x) = o(x * l(x)) = x * l(x) + 1 can be interpreted as List x = Option (x, List x)
l(x) = x * l(x) + 1 = x * (x * l(x) + 1) + 1 = x * x * l(x) + x + 1 = x * x * (x * l(x) + 1) + x + 1 = x * x * x * l(x) + x * x + x + 1, so a list is either empty, has 1 element, 2 elements, … (if you keep substituting)
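In Rust terms, that recursive equation is just the familiar linked list (a sketch, not from the comment; the `Box` only exists to give the recursion a finite size):

```rust
// List x = 1 + x * List x
enum List<T> {
    Nil,                    // the "1": an empty list
    Cons(T, Box<List<T>>),  // the "x * List x": one element times the rest of the list
}
```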
For the expression problem, read this paper: doi:10.1007/BFb0019443
Mathematically, the union of disjoint sets is often called the sum. This is a natural name because when you look at the number of elements (or in general, any measure), it will be the actual numeric sum.
Why the emphasis on “disjoint”? Aren’t integers a subset of floats? Would that mean then that `int | float` is incorrect?

In most programming languages, integers are not considered a subset of floats, so when you have the type `Int | Float`, you can distinguish `3` from `3.0`.
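As a sketch (the type name is invented), a tagged sum keeps the two apart even though they’d compare equal as numbers:

```rust
enum IntOrFloat {
    Int(i64),
    Float(f64),
}

fn describe(v: IntOrFloat) -> String {
    match v {
        IntOrFloat::Int(n) => format!("the integer {n}"),
        IntOrFloat::Float(x) => format!("the float {x}"),
    }
}

fn main() {
    // The tag records which summand each value came from.
    println!("{}", describe(IntOrFloat::Int(3)));
    println!("{}", describe(IntOrFloat::Float(3.0)));
}
```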
How do you “add” types together? Adding numbers makes sense, it has a real world equivalent. Two balls in a cup, add one ball and you have three balls in a cup. Add color to water and you have colored water. Simple. But types? The fuck?
It makes sense when using some fluent patterns and things like monads. For example:
```
User user = new User("Bob");                                                   // User class
UserWithPassword userWithPassword = new User("Bob").WithPassword("Dylan123"); // UserWithPassword type
```
A `UserWithPassword` type would then be a `User` object wrapper with some `IWithPassword` interface.

Then you could create extension methods on `IWithPassword` objects and decorate those objects with password behavior.

You can then have sort of polymorphic behavior by combining types together, and have different functionality available depending on which types you’ve added together.
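A rough Rust analogue of the wrapper idea (the struct fields and method bodies are invented for illustration; only `User`, `UserWithPassword`, and the `WithPassword` step come from the comment above):

```rust
struct User { name: String }
struct UserWithPassword { user: User, password: String }

impl User {
    fn new(name: &str) -> Self {
        User { name: name.to_string() }
    }

    // Consuming builder step: returns the richer wrapper type.
    fn with_password(self, password: &str) -> UserWithPassword {
        UserWithPassword { user: self, password: password.to_string() }
    }
}

// Functionality that only exists once the password has been added.
impl UserWithPassword {
    fn check_password(&self, attempt: &str) -> bool {
        self.password == attempt
    }
}
```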
Saying “X is bad” or “Y over X” is not the same as saying “there is never a place for X”. I think JS is a pretty bad language, and prefer other languages to it, but I still recognise very obvious places where it should be used.
Maybe it depends on the way you understand types, but to me sum and product types are completely intuitive. A type is a set of possible values. A sum type is multiple sets added together (summed).
A type is a set of possible values. A sum type is multiple sets added together (summed).
That makes sense for `str | int`, but how is an enum a “sum type”?

As for product types, in set theory a product of sets is a Cartesian product. How is a
```rust
struct Dog {
    height: u8,
    length: u8,
    name: String,
}

impl Dog {
    fn bark() {
        println!("woof!");
    }
}
```
a product? What is it a product of? And why is the type itself a product, not `Dog x Cat`? Or is `Dog x Cat` indeed some kind of product that I’m not aware of but with another syntax?

Well, what is an enum except a chain of `X | Y | Z | ...`. An enum can be any of its variants, and therefore the set of its possible values is just all the possibilities of its variants added together.

Consider this enum:
```rust
enum Foo {
    A,
    B(bool),
}
```
The possible values for `A` are just one: `A`. The possible values for `B` are `B(true)` and `B(false)`. So the total possible values for `Foo` are simply these sets combined: `A` or `B(true)` or `B(false)`.

As for product types, what the product is of is still the same: the sets of possible values. Consider the possible values for the product of `A` and `B`. For every possible value of `A`, a value could be made by matching it with any possible value of `B` (so, multiplication). If there are 3 possible values of `A` and two possible values of `B`, then the total number of possible combinations of these for the product type is 6.

In your example, `Dog` is a product of `u8`, another `u8`, and `String`. If you decide to add a Boolean field to this type, accordingly the size of the set of options would double, because for every possible `Dog` you currently have, two possibilities would be created, one with a `true` and one with a `false`.

As for your last question, some languages might use `x` as product type syntax, but because tuples and structs are inherently product types, most languages use those as syntax. For example, in Haskell the product type of `Dog` and `Cat` would be written as `(Dog, Cat)`.
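Rust spells that product the same way, with tuples; here is a sketch (the `Cat` type is a made-up placeholder):

```rust
struct Dog { height: u8, length: u8, name: String }
struct Cat { name: String }

// The product type of Dog and Cat, written as a tuple:
// every possible Dog paired with every possible Cat.
type DogAndCat = (Dog, Cat);
```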
Saying “X is bad” or “Y over X” is not the same as saying “there is never a place for X”.
That rarely comes across online, where opinions are often stated dichotomously. Especially when speaking of inheritance, some crowds (I’ve noticed this in the Rust crowd) are vehemently against it and will do nigh anything not to think of a problem as one of inheritance, nor of a solution that could benefit from inheritance. The makers of the Servo browser engine, which has to implement hierarchical structures (the DOM), ran up against this exact issue within the Rust community, where inheritance might as well equate to blasphemy.
I recognise that it’s probably a loud, zealous minority, but it makes interacting with the rest of the community rather difficult.
Obviously, it’s not absolutely bad. I think the reason why it became very popular was because it allows you to reuse and extend functionality that already exists. This gives you a higher development velocity, which is obviously something that companies prioritize, and developers want to provide.
Right, that’s the third meaning he cites for inheritance.
Was half asleep when I wrote this lol
It does frustrate me that people say “composition over inheritance” and then act like the problem is solved.
For a start, dependency injection is an abomination. Possibly the single worst programming paradigm in the history of software development.
The only codebases I’ve seen that deliver on some of the promises of composition have been entity-component systems, but this concept hasn’t even matured to the point of being a first-class language construct yet, so the programming style is pretty janky and verbose.
dependency injection is an abomination
I don’t think so; dependency injection has made testing easier in every statically typed codebase I’ve worked on.

The general idea is to not test the things that need to be mocked: keep all logic out of those and keep them at the edges of the application. Then you test the remaining 90% of your code, which would ideally be pure functions that don’t need mocks.

The remaining 5% can be integration tests, which don’t care about DI.
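A tiny sketch of that split, with invented names (the logic lives in a pure function, the mockable edge sits behind a small trait):

```rust
// The I/O edge, kept thin and behind a trait so tests can fake it if needed.
trait OrderStore {
    fn load_order(&self, id: u64) -> (u64, bool); // (total in cents, loyal customer?)
}

// Pure logic: trivially unit-testable, no mocks required.
fn apply_discount(total_cents: u64, loyal_customer: bool) -> u64 {
    if loyal_customer { total_cents * 90 / 100 } else { total_cents }
}

// Thin edge: just fetches and delegates; a few integration tests cover this.
fn final_total(db: &impl OrderStore, order_id: u64) -> u64 {
    let (total, loyal) = db.load_order(order_id);
    apply_discount(total, loyal)
}
```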
In reality this doesn’t always work, because sometimes it’s better to write a complex query with logic and let your database do the processing.
How does DI make testing easier, or the lack of it make it harder?
Having a framework scan your code tree for magic annotations and configuration, etc., in order to instantiate “beans” or whatnot, is the worst tradeoff I’ve ever seen in software development. Calling it an abomination sounds exactly right.
It’s an overloaded term:
“Dependency inversion” is a language-agnostic technique for producing testable, loosely-coupled software.
“Dependency injection” just means dependencies should be passed in through the constructor, instead of being magically new()'d wherever (see the sketch after this list).
“DI frameworks” are Satan’s farts. Classpath-scanning nonsense that turns compile-time errors into runtime errors. Not only is your Ctr still coupled to your Svc, but both are now coupled to Spring.
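For what it’s worth, a minimal sketch of plain constructor injection in Rust (names like `Mailer` and `SignupService` are made up here):

```rust
trait Mailer {
    fn send(&self, to: &str, body: &str);
}

struct SmtpMailer;
impl Mailer for SmtpMailer {
    fn send(&self, to: &str, body: &str) {
        println!("sending to {to}: {body}");
    }
}

struct SignupService<M: Mailer> {
    mailer: M, // injected, never new()'d inside
}

impl<M: Mailer> SignupService<M> {
    fn new(mailer: M) -> Self {
        SignupService { mailer }
    }

    fn sign_up(&self, email: &str) {
        self.mailer.send(email, "welcome!");
    }
}

// In tests you can pass a fake Mailer instead of SmtpMailer.
```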
“DI frameworks” are Satan’s farts. Classpath-scanning nonsense that turns compile-time errors into runtime errors. Not only is your Ctr still coupled to your Svc, but both are now coupled to Spring.
Let’s qualify this: DI frameworks that use configuration files to specify the dependencies are Satan’s farts. You can use the DI pattern but do the injection in source code to get full compile-time type checking and all the benefits of a fully packaged JAR or similar binary. The root of the evil is having text files as part of the packaging, and not having the tooling to treat those files as part of the source code at build time.
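Continuing that thought, the wiring can live in ordinary code, e.g. reusing the made-up types from the sketch above:

```rust
// Manual wiring in a "composition root": no framework, no config file.
fn main() {
    let service = SignupService::new(SmtpMailer); // concrete choice made here, in code
    service.sign_up("someone@example.com");       // fully compile-time checked
}
```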
I see. Thanks! Looking at wikipedia, it sure seems to be a case of an overloaded term.
I don’t generally use the first and second terms you mention, because they’re in the “common sense / obvious” realm, and I don’t like to use complicated-sounding jargon for simple concepts. For example:

```
// Good:
MyClass(Resource(configValue))

// Bad (its insides do something à la resource = Resource(configValue)):
MyClass(configValue)
```
If this is an example of dependency inversion… hm, I suppose. I suppose having a word for it is fine. “Inversion” also doesn’t make much sense if you start out thinking that’s what makes sense to begin with. As for the “dependency injection”? Sounds complicated, but really isn’t. Not sure who these kinds of terms are supposed to help.
As for DI frameworks using the word “injection”, I’ve always thought that made sense, because the connotations of injections feel applicable: no one looks forward to “an injection”, and aside from the obvious, it’s usually done by someone else, you might not be aware of the details of what happened, and you have no good way of figuring out what it was or why you suddenly start feeling weird.
That quote from Meyer about contravariance not being “useful” is either a misreading of the cited source (the closest thing to a direct quote is about C++'s lack of variance) or, more charitably, a massive oversimplification. I don’t know Eiffel, but from Meyer’s speech, it sounds like Eiffel lets you choose between something like covariance and something like contravariance for each method. Unfortunately, the second link provided for context seems to be broken.