C# is a fairly old language - twenty-three
years and counting - and so is .NET. With so much baggage we carry from the old
days and with so much background noise, it is no wonder many C# programmers are confused
by some of the features and even entire libraries. In this video, I will draw your attention
to three features of C# and .NET which some programmers never use. Not
because C# is bad, but because so many C# programmers fail to understand them.
That is tragic. It need not be that way. So, stay with me, hear me out.
Misconception number 1: Records are useful as DTOs and nothing more.
I hear that a lot. I mean a lot. But that is blatantly wrong, and I will now show you why.
This is a record. This is a DTO. You can use them interchangeably in most scenarios where you
need a DTO, so yes, a record can be used as a DTO. A plain DTO is a tad more manageable if the
emphasis is on serialization and deserialization. But that is where the
similarities end, because records have a few other features that DTOs don't have.
A record is immutable by default, and its hash code is derived from the values it contains.
It also comes with value semantics: two records with equal content are equal.
DTOs have reference equality. I am talking about the default behavior of both - you can override all
this, but that would require effort. Immutability and value semantics
mean you can use a record as a dictionary key, while you cannot use a DTO that way.
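Here is a minimal sketch of that difference. The type names `Point` and `PointDto` are illustrative, not taken from the video; the point is that the record's synthesized value equality makes it a usable dictionary key, while the class-based DTO falls back to reference equality.

```csharp
using System;
using System.Collections.Generic;

public record Point(int X, int Y);   // immutable, value equality by default

public class PointDto                // plain DTO: reference equality by default
{
    public int X { get; set; }
    public int Y { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var counts = new Dictionary<Point, int>();
        counts[new Point(1, 2)] = 7;

        // A structurally equal record finds the same entry:
        Console.WriteLine(counts[new Point(1, 2)]);               // 7

        // Two equal-looking DTOs are two different keys:
        var dtoCounts = new Dictionary<PointDto, int>();
        dtoCounts[new PointDto { X = 1, Y = 2 }] = 7;
        Console.WriteLine(
            dtoCounts.ContainsKey(new PointDto { X = 1, Y = 2 })); // False
    }
}
```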
Being a value, a record will not contain an ID. That is what we put into
entities to track their lifetime. A record can have some key value, but it
would not understand it as its own key. Same with a DTO: if it transfers an entity,
then it would naturally contain the entity's ID. Last but not least, you will avoid placing
collections onto a record. That would violate the record's value semantics.
That constraint is not relevant to DTOs. The bottom line is that records are much more
than DTOs. Anyone who still believes that records are only DTOs needs more education.
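To see why a collection property undermines a record's value semantics, consider this sketch (the `Order` type is illustrative): the compiler-synthesized equality compares the `List<T>` by reference, not by content, so two records with identical visible state come out unequal.

```csharp
using System;
using System.Collections.Generic;

public record Order(string Customer, List<string> Items);

public static class Demo
{
    public static void Main()
    {
        var a = new Order("Ann", new List<string> { "book" });
        var b = new Order("Ann", new List<string> { "book" });

        // Same visible state, yet not equal: the two List instances
        // are distinct objects, and List<T> has reference equality.
        Console.WriteLine(a == b);   // False
    }
}
```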
If you wish to learn more about .NET and C# programming, then subscribe to my channel.
There are many videos where I am explaining how C# really works and should be used.
You can also support my channel by becoming a sponsor. Visit my Patreon page and
join the growing community there. That will give you access to more
resources, discussions, and trainings. You can also join my Discord channel, where you
can take part in discussions and learn more. Now, on to misconception number 2: LINQ is slow.
I hear that a lot, too. And I hear it from two kinds of people: those who never
measured performance and those who don't know what performance means.
Let's clarify where the performance of LINQ comes from.
LINQ applies to sequences, but collections are also sequences.
There is, however, a fundamental difference between a ready-made collection of objects and
a lazy-evaluated sequence of those same objects. You need to know how iterating
through a lazy-evaluated sequence differs from iterating through a collection.
Here, I am instantiating many relatively large objects and storing them into a collection.
The size of this collection is just over 8 megabytes, and that will cause issues
on a CPU with an eight-megabyte cache. Allocating the last object will cause
eviction of the first one. Now, I iterate through the objects, paying for
the first one to be brought back from main memory. That fetch will evict another poor item, which I will
have to wait for again once the iteration reaches it. You get the picture.
I am destroying performance by fetching all the data up front.
With LINQ and a lazily evaluated sequence, we create an object, process it, and
leave it to the garbage collector. Data locality is improved, giving an excellent
opportunity for the CPU cache to shine. Hence the rule: Measure
first, draw conclusions later. You might be surprised to see how fast some LINQ
queries are when you compare them to alternatives. The second half of the story is to know
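The contrast can be sketched like this. The sizes and names are illustrative; the point is that without `ToList()` the `Sum` pulls one buffer at a time through `Select`, so each object is created, consumed, and abandoned before the next one exists, keeping the working set small.

```csharp
using System;
using System.Linq;

public static class Demo
{
    public static void Main()
    {
        // Eager: all 1,000 buffers are alive at once before processing starts.
        var eager = Enumerable.Range(0, 1_000)
                              .Select(i => new byte[8_192])
                              .ToList();                      // materialize
        long eagerSum = eager.Sum(buffer => (long)buffer.Length);

        // Lazy: no ToList(); each buffer becomes garbage right after it is summed.
        long lazySum = Enumerable.Range(0, 1_000)
                                 .Select(i => new byte[8_192])
                                 .Sum(buffer => (long)buffer.Length);

        // Same result, very different memory profile.
        Console.WriteLine(eagerSum == lazySum);   // True
    }
}
```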
what it means to say something is slow. Where do the data come from?
Where do the results go? Take the data from the database and
wait at least a millisecond for them. Process the data either way -
that might take ten microseconds. Then, send them across the Pacific, adding at
least some 50 milliseconds at the speed of light. Let's do a back-of-the-envelope calculation.
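The numbers above are rough orders of magnitude, but they are enough to check the 0.02% figure:

```csharp
using System;

public static class Demo
{
    public static void Main()
    {
        double databaseMs = 1.0;      // fetch from the database: at least a millisecond
        double processingMs = 0.010;  // ten microseconds of in-memory processing
        double networkMs = 50.0;      // cross-Pacific round trip

        double total = databaseMs + processingMs + networkMs;
        double share = processingMs / total * 100;

        // Roughly 0.02% of the response time is the part
        // some people rush to optimize.
        Console.WriteLine($"{share:F3}% of the response time is processing");
    }
}
```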
Here is the timeline of a Web response. I have a message for anyone who would invest
in optimizing 0.02% of the execution time: Go for it and never look back!
That will slow them down. I will repeat this: Don't be ignorant. Everyone
must learn programming before doing programming. Subscribe to my channel and learn from other
videos I have made about C# and .NET programming. And again, join me on Patreon and Discord
for more discussions about C# programming. You can ask questions there.
Related to the previous discussion of LINQ, here comes the third misconception, one I hear almost daily.
Some programmers are very particular when they say: I won't use immutable
design because it is slow. Spoiler alert: I will now tell you the root
cause of why some programmers abstain from using immutable design. That is their way to
hide the fact that they have no clue about functional programming.
So, let me explain. In an immutable model, you cannot
change an object after instantiation. Instead, you create a new object with all its
state copied except what you wish to change. That is called non-destructive mutation.
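In C#, non-destructive mutation is spelled with the `with` expression on records. A minimal sketch (the `Account` type is illustrative): the original object is untouched, the new one shares every field you did not name, references included.

```csharp
using System;

public record Account(string Owner, decimal Balance);

public static class Demo
{
    public static void Main()
    {
        var before = new Account("Ann", 100m);

        // Non-destructive mutation: a new object, one field changed.
        var after = before with { Balance = 150m };

        Console.WriteLine(before.Balance);   // 100 - the original is untouched
        Console.WriteLine(after.Balance);    // 150
        Console.WriteLine(after.Owner);      // Ann - copied, not recreated
    }
}
```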
Is this a slow process? No. In fact, it is as fast as calling a function.
It is so because we don't really copy any state. We copy references to other objects,
which is as cheap as passing those references to function parameters.
We don't avoid calling methods on the grounds that it will slow our program down.
Some say that this process, if not slow, is costly as it creates many short-lived objects
that require expensive garbage collection. That is another misconception, this time stemming
from not knowing how garbage collection works. It takes time to deal with the
living objects, not the dead ones. The garbage collector follows references
to discover the living objects, then does a potentially costly memory-to-memory copying to
relocate them near the beginning of the heap. Guess what: the garbage collector never touches a
short-lived object, because it is unreachable. Deleting all, and I will emphasize, all
short-lived objects from memory is as costly as changing the value of the end-of-heap
pointer - a single memory location assignment.
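A small experiment hints at this. We allocate a million short-lived arrays, keep none of them, and then ask the runtime how big the managed heap is; since none of the temporaries are reachable, the heap stays small. The loop count and array size are illustrative.

```csharp
using System;

public static class Demo
{
    public static void Main()
    {
        // A million short-lived allocations; none survive the loop body.
        for (int i = 0; i < 1_000_000; i++)
        {
            var temp = new byte[1_024];
            temp[0] = 1; // use the array so the allocation is observable
        }

        // After a full collection, none of those temporaries remain reachable.
        long heapBytes = GC.GetTotalMemory(forceFullCollection: true);

        // A few megabytes remain, not the gigabyte of dead temporaries we allocated.
        Console.WriteLine(heapBytes);
    }
}
```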
this topic: Is immutable design slow? No, it is not when done right.
Oh, when done right. Alright, that is the part
which I kept for the end. Once you master immutable design,
you will attain other benefits. A predictable state will help you
write expressive code, reducing the number of bugs by an order of magnitude.
Immutability will promote composable objects and composable methods, helping you write way shorter and
more flexible code than their mutable counterparts. It all takes knowledge and
education. Let's face it. I will tell you this straight. Suppose
you are avoiding functional design in .NET out of fear of low performance. If and when
I hear that, I know something about you. I know you have never seen how functional
designs perform, nor have you seen what comes packed together with them, such as a wonderful
expansion of your design capabilities when you use functional designs.
Did I poke you in the eye? Good if I have.
I would rather see all C# and .NET programmers start appreciating
the functional capabilities of C#. You can learn a lot about those
capabilities on my channel. Subscribe and start by watching
this informative video first.