- My name is Andrew Sutton. I work at the University of Akron. My talk today is titled Concepts, everything you need to know about concepts and nothing you don't. So, a funny thing happened,
I had planned to just come in and talk about the language
features in the Concepts TS, and then I sat through Bjarne's keynote and listened to the questions afterwards and realized that I couldn't
just give that talk. So anyways, I don't usually do this but this is the abstract of my talk, you guys can read it
very quickly if you want. Normally that would be a
hidden slide. (chuckles) But the point is that I
was supposed to talk about everything in the Concepts TS, and I realized that I don't
think I can actually do that. It's kind of big and I
kind of get the feeling that a lot of people are missing out on some very fundamental aspects about what generic programming is, and the role that concepts play in that. And of course, we have this problem. It is simply not possible to
present concepts in 60 minutes, it is actually too big of a feature. There are really interesting
language features, there are really interesting problems. So I had to scale my
talk back a little bit and focus more on what I believe to be the fundamental issues, so really we're gonna go
with something like this. Generic programming from
the ground up using concepts but not everything in
concepts, not even close. I think that's gonna be a more
realistic expectation here. So I did leave this slide in
because I thought it was good. These are the things I think that people need to know about concepts. Generic programming is a good
thing to know if you're not, if you're using concepts but don't think you're
writing generically, I worry for your sanity. What concepts are and where to find them. I'm missing my, of course,
Harry Potter reference here. How do you write constrained templates, that's gonna be primarily
the focus of this talk today. How you actually define
concepts is really interesting and worthy of at least an
hour-long talk on its own, on its own right. What makes a concept good
goes into part of that, and then what makes a generic library good falls out of that too. And both of those are
really complicated topics that are certainly
worthy of a lot of time, especially to be given by somebody who's actually done this work before, and I think there are
only a handful of people in the world that have, and one of them is
sitting about midway back. Eric Niebler over there, I see you. So really that's the talk today. (audience laughs) And there's 115 slides, so. (laughs) We'll see how this goes. So in addition to what you need to know, I think it's also important to know what you don't need to know. These are things you don't
need to know for this talk. You do not need to know about template
metaprogramming for this talk. In fact, I can give this entire talk without talking about what passes for template metaprogramming today. You do not need to know about SFINAE. That is the last time you
will hear me use that word. You do not need to know about type theory. This is good. You don't need to know
about category theory. This is better. It's not that you shouldn't
know about these things, it's just that you don't
necessarily need them to write a good generic library. That's not where this stuff comes from. So there's two
implementations of concepts, this is also something good to know. The GCC, the official
version of GCC since version 6, has had an implementation of the TS. There have been a flurry of
bug fixes lately, that's nice. There's a Clang implementation by Saar Raz that I haven't looked at yet. I believe it implements most of the working paper. I hear it's in good shape. I'm getting nods. And I'll just go ahead and throw this out, there is also a second
implementation of GCC residing on GitHub. Matt, now's your chance. (laughs) Sorry, Matt Godbolt. So there's a second
implementation of concepts in GCC, it's not merged in the trunk at all, it's kind of separate at the moment. Concepts turned on with -std=c++2a, if you want your old
Concepts TS features back like the terse notation,
template introducers, a nice requires clause grammar, you can throw on -fconcepts-ts
and get all that stuff back. And also, of course, now
means now with more bugs, because it is not a
small language feature, it touches virtually
everything in the language. So, of course. Anyways, so generic programming
and libraries, nice title. So the whole point of
having generic libraries, I think really the whole point
of having generic libraries is to basically reduce redundant code by writing our algorithms and
data structures abstractly. That should be the point. We don't like to keep
repeating the same thing over and over and over again. Like Bjarne alluded to this in his keynote when he talked about using macros, which didn't work out too well. So the result, we end
up with our templates in generic libraries in C++, and they're actually quite good. They're very general. You can pick standard libraries and use them with just about anything, and they're very fast, like
fastest-in-the-world fast. And so these are the things that we actually like to get
out of generic libraries. Question is how do we do it? So the talk really goes in two parts, first I'm gonna talk about generalization, then I'm gonna talk about performance. It's not a deep dive in performance, it's just performance
with relation to concepts. So how do we generalize
or genericize code? There's kind of two ways you can do this. The first and kind of
poor man's generalization, I guess, I really like this feature,
is overloading, right? Overloading actually just lets us extend the domain of a function. You can take an algorithm
written for ints, you can make another one
written for unsigned ints, you can take a third one
written for doubles, whatever. It is general from the user's perspective, it's not general from the
implementation perspective, it's general from the user's perspective. I have my types, I call that
name, it does what I expect. That is in some way general,
generic in some way, right? And you kind of get this
with operator overloading, you kinda get this with
like different flavors of sine and cosine, you get it with, I think probably the most
recent version of this, we have the Ranges TS that
overloads standard algorithms for ranges instead of for iterator pairs, which is a nice thing to do. We're just horizontally extending the domain of those functions. It's not real generalization. So we're gonna walk through a bunch of concrete algorithms for this. So here's my implementation
of min_element, I really hope I got it right. It seems okay. I may have stolen it from GCC, but I actually ended up
using a contract, so we have an assertion
in here that's an axiom that our input range is
expected to be bounded, meaning that I can apply ++ to first and I eventually get to last, and I can dereference every
element in that set of iterators except the last one. And the reason it's an axiom is because you cannot actually
implement this function. I challenge you to try. It will not work. And so if we want to extend this domain for another set of types, we
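The concrete algorithm being described probably looks something like this. This is a sketch, not the exact slide, and `min_element_int` is a hypothetical name; the point is that the bounded-range axiom lives in a comment, because it cannot be checked in code:

```cpp
#include <cassert>

// Concrete min_element over a range of ints.
// Axiom (uncheckable): [first, last) is a bounded range -- applying
// ++ to first eventually reaches last, and every iterator before
// last can be dereferenced. You cannot implement a check for this,
// which is exactly why it is an axiom and not an assertion.
const int* min_element_int(const int* first, const int* last) {
    const int* smallest = first;
    for (const int* p = first; p != last; ++p)
        if (*p < *smallest)
            smallest = p;
    return smallest;
}
```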
can just go ahead and copy it and paste it and do like a little rename, and we get this to work for
a sequence of doubles, right? Maybe. Not quite, right? So it turns out that we
actually have a small problem, which is with doubles
that if we compare them we might not actually
get defined behavior. Floating point numbers have not a number, if you try to find the
min_element of a sequence without a number in it, especially if that first element is NaN, you will not find the
minimum of that sequence. What happened? You'll get NaN. So we probably want
another condition up here, we'll just go ahead and assert there's no NaNs in this sequence, now we can actually validly
compute the algorithm, the minimum of a set sequence, all right? So let's do it one more, oh, here's that algorithm
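The doubles version with its extra precondition might be sketched like this; `no_nans` and `min_element_double` are hypothetical names, and note that unlike the bounded-range axiom, this precondition is actually checkable:

```cpp
#include <cassert>
#include <cmath>

// NaN compares false against everything, so a NaN in the range
// (especially at the front) silently breaks the "find the smallest"
// logic. This precondition can be implemented, so it gets a check.
bool no_nans(const double* first, const double* last) {
    for (; first != last; ++first)
        if (std::isnan(*first)) return false;
    return true;
}

const double* min_element_double(const double* first, const double* last) {
    assert(no_nans(first, last));  // precondition, not an axiom
    const double* smallest = first;
    for (const double* p = first; p != last; ++p)
        if (*p < *smallest)
            smallest = p;
    return smallest;
}
```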
for completeness, just so you can write it. Let's do it one more time. This one is a bit odd, but I like it. So we're going to find the minimum in an array of pointers to
integers, the least pointer. Does it make sense to do this? It might. But there's some serious
preconditions on this, right? Like, that is iffy. This is not a very well-defined relation, it's not a very, it's a sparse
relation, let's say that. So we actually have something
that we wanna do like this. Every value has to be ordered. If we can actually figure out that every element in that range satisfies the trichotomy law, which is something is either less than, it's greater than, or it's equal. If all of those values compare, then we can in fact find
the minimum point of this, and we can implement that algorithm too. But it's, of course, N squared,
so we don't really want to, we wanna audit that, you
don't wanna run it by default, that's pretty slow, turning an N algorithm into N squared. So we'll just go ahead and
put that there, that's nice. Oh sorry, we're gonna, we can actually take that precondition, ah, that wasn't supposed to happen. What did you do? Wrong button. Computers are hard. There we go. So we'll just go ahead
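A sketch of that "everything is ordered" audit, under the assumption that it checks trichotomy pairwise (the name `all_ordered` is a stand-in for whatever the slide calls it):

```cpp
#include <cassert>
#include <cmath>

// Every pair of values in the range must satisfy trichotomy:
// exactly one of <, >, == holds. It is O(N^2), which is why it
// belongs behind an audit-level assertion rather than a default-on
// check -- it would turn an O(N) algorithm into O(N^2).
template <typename Iter>
bool all_ordered(Iter first, Iter last) {
    for (Iter i = first; i != last; ++i)
        for (Iter j = first; j != last; ++j) {
            int holds = int(*i < *j) + int(*j < *i) + int(*i == *j);
            if (holds != 1) return false;  // trichotomy violated
        }
    return true;
}
```

Note that this subsumes the no-NaNs check for doubles: a NaN compares neither less, greater, nor equal to anything, so it fails trichotomy.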
and take this precondition and propagate that back
to our min for doubles. That works out just fine. That precondition is basically equivalent, it's just more expensive to compute, but I like it because
now all of our algorithms start to look the same,
and then we'll go ahead and we'll propagate that
back to all ordered, because again, it's
syntactically the same, even though the cost is not
worth computing in this case. That should just return true
for ints because it does. So, we've basically
taken a single algorithm, we extended it a bunch of times, now we have different domains, and we figured out like all of this fits the same syntactic pattern. This is now something that we can actually take advantage of, we can make it generic, right? And so this is really where generic programming comes from. We've taken a bunch of concrete
examples of real algorithms, we figured out that we can
write them roughly equivalently, syntactically equivalently, and then we should be able to lift them into their more abstract and
more general form, right? So identify the common
operations of your algorithm, replace those concrete
types with placeholders, and hypothetically you have
what is a generic algorithm. You guys all agree? Now of course the other side of this, when you look at these
algorithms here like this one, we can make it like, we
can adapt types into this because we can overload operators, right? It doesn't just work for built-in types with star and equal and whatever, we can actually make
types conform to that. So we don't have to worry
too much about trying to, we don't necessarily have to worry about trying to form-fit every
algorithm to every possible type. Pick a syntax, adapt to
the syntax as you need to. Operator overloading is a lovely thing. So here's our concrete
algorithm again from before, just for ints this time, and if we wanna make it
generic, what do we do? That. Right? (audience laughs) So this isn't valid C++ by the way, this is valid C++ from the Concepts TS, but it doesn't work in the
working paper, the current draft. It's also really not
generic, it's overly generic. First and last aren't even guaranteed to be the same type in this. The return type is
probably gonna be the same as one of them, although maybe not. No, it is. I don't know, it's something, right? So from generic algorithms
we can talk about templates, and this is the way that I
love introducing templates to my students in my class. Templates are not magic, they just let us declare
placeholders explicitly. That's all they do. So we have this, now you have that. And this is still a generic algorithm, it's still overly generic because we haven't actually said what's in the algorithm, right? Instantiating templates
is the other side of this. So basically we're gonna generate a new version of our algorithm by supplying concrete arguments. The compiler will happily spam out a new implementation of it. So if I use my algorithm
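The lifted template being described might be sketched like this; it is the same body as the concrete versions, with the concrete iterator type replaced by an explicit placeholder:

```cpp
#include <cassert>

// The unconstrained template. Note how little it says: "typename Iter"
// admits any type at all, which is exactly the problem concepts address.
template <typename Iter>
Iter min_element(Iter first, Iter last) {
    Iter smallest = first;
    for (Iter p = first; p != last; ++p)
        if (*p < *smallest)
            smallest = p;
    return smallest;
}
```

Calling it with, say, an array of ints deduces `Iter` to be `int*`, and the compiler stamps out a specialization equivalent to the concrete version, preconditions included.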
correctly, I should point out, I call min_element here, or there, and the compiler will figure out that I'm trying to instantiate
for a pair of integers, sorry, pointers to integers, and spam out a new definition of that code that if you happen to
write in by yourself, would basically look like this. This is an explicit
template specialization. The name of the template is
just a little bit different, it's funky, it supports lookup, but it is exactly what
we instantiated before, including the preconditions,
which is an important thing. There are of course
some downsides to this, like templates are great,
generic programming is great, we got some problems. We don't directly express
our intent with templates. Like we don't say what's
expected for them, we just say, you know what,
this will take any type. Not really, but we'll take any type. No really, like not all types are allowed, but typename T, that says anything, right? We really end up with a system that's kind of akin to
K&R-style typing for functions. The typing was part of the definition, it wasn't part of the interface; you pass something in, and your implementation blows up later. We get that. We get it in kind of a type-safe
way, it's just that we end up with these really nasty
compiler errors, right? You end up blowing up this
template instantiation stack way down the line and you get
these really lovely errors. And if you don't believe me, try this. This is guaranteed to be wrong. Anybody know why it's wrong? (audience members speaking faintly) - [Man] It can't take pointers directly. - Is that really the error? That you can't form a pointer to a reference? That's what the compiler says. Can't form a pointer to a reference, right. I mean, the compiler says
that's not the error, that's a symptom of the error, right? The actual error is that,
int ref is not an object. That's the error that we really want. Concepts aren't quite there to get us that level of specificity,
but we'd like to fix that. They're good, but we can't
customize error messages yet. That's the error. Okay, so this is where concepts come in. That only took 15 minutes? Wow, good stuff. I can slow down. Okay, so concepts let us
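The transcript doesn't show the exact slide, but the diagnosed error reduces to something like this: instantiating the algorithm with a reference type as the iterator. The compiler's complaint ("cannot form a pointer to a reference") is a symptom; the real error is the one below.

```cpp
#include <type_traits>

// A reference is not an object type, so it cannot be an iterator.
// That is the error we'd actually like the compiler to report.
static_assert(!std::is_object<int&>::value,
              "a reference is not an object type");
static_assert(std::is_object<int*>::value,
              "a pointer is an object type");
```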
constrain template arguments. That's pretty much what they do, they let us directly express the intent, like what we want in our template. Here is an algorithm, what kinds
of types does it work with? Because we express those intents
as part of the declaration, we can also check them at the use site. So when you go to
instantiate your template, the compiler goes, I know
what your constraints were, I know what the concepts
were you wanted to use, I'll just go ahead and check
that you satisfy those, and if you do, that's great. If you don't, I'll give you an error. I'll prevent you from using this template, I'll prevent you from using whatever declaration you wanna use. We can also specialize
behaviors based on constraints, that's kind of the back half of the talk, when we talk about
type-based optimization. And they do as a bonus
give us better diagnostics. We're not requiring some kind of little, like deep language trick
to prevent instantiation, which means that when
you generate diagnostics for these things, we don't have to try to parse apart whatever
it is that you wrote, we can rely directly on what
the concept definition says, and say, this is the thing
that you failed to do. I have some examples of that
a little bit later, I hope. So, what is a concept? Well, it's kind of base-level. It's a named predicate expressing requirements
on template arguments. That's what a concept is. And we really have three
kinds of requirements that we want to express. We have syntactic requirements that say, these are the expressions you can use. For a forward iterator,
these are the operations you are allowed to use, or
these are the operations you must provide if you wanna
implement a forward iterator. We also have semantic requirements, and semantic requirements tell us, I have to be careful to
say this the right way, they do not define what the operations do, they require certain
behaviors of those operations. There is a big difference
between those two statements. As they tell us what
behaviors are required, not how they happen, not what they do. And then we have complexity
requirements that are essential when you actually wanna compose programs from generic facilities. We want some guarantees
that if you have a linear, like a constant-time iterator
going into an algorithm that you're gonna get a
linear algorithm as a result. We don't want computational blowup because you accidentally happen to put like an N cubed function
into a sort algorithm. Oops, N to the fifth,
or N log into something. It's too late in the day for math. An example of (laughs)
the forward iterator, just ForwardIterator. So for some value i, this
is the semantic requirement, or syntactic requirement, i++, or ++i must be a valid syntax. For whatever type, that has to compile. You need to be able to
instantiate that thing. The semantic part of that is
that when i is in a range, a bounded range of iterators, ++i moves to the next element, and that you can dereference that element, unless it's the last element in the range. That's the semantic part. I'm not telling you how
to move the iterator, I'm not telling you what
dereferencing must do, I'm telling you that those are the things that have to happen. If those things don't happen, then no algorithms using your
iterator will work, period. It's all about expectations. Okay, so really quickly, I
gotta answer this question because I'm not talking about it at all. Where do concepts come from? The answer is experienced
library designers. Seriously good concepts are hard to write, they take a lot of experimentation. You have to balance them against the way that people want to use
algorithms in your library, and refine, and it is
a process, and a cycle, and I see Eric nodding,
hopefully not sleeping. Nodding in agreement, yes. I'll go with that. So contact your library vendor. Go talk to the guys at Boost, go talk to the Standard Library guys. This is where you wanna go for concepts. If you find yourself trying
to write your own concepts, take a step back and see if somebody hasn't done it before you. That is my recommendation for that. You will find yourself
rediscovering abstract algebra if you've tried to
write your own concepts. So concepts and types. So a type can be checked
against a concept to determine if you're going to, if it's satisfied, or if it satisfies the
syntactic requirements. That's kind of an important thing. Whenever we ask if a
type satisfies a concept, we're only ever asking if the type satisfies the
syntactic requirements, because that's all we can
actually check, right? We don't have a compiler
that can look at an algorithm and compute complexity bounds, not effectively, not decidably. We definitely can't ask if a type satisfies the behaviors
expected by an algorithm, that requires some
serious theorem-proving, and I believe that is not
in any way, shape, or form, decidable for anything other
than like trivial semantics. So we can't prove, for example, that your type does the
right thing for iterators, this is not something we can do. We can however say, you wrote an overload of operator ++, we'll go ahead and accept that. And that's our token to say
that you are a forward iterator. So we rely on syntax as a surrogate for semantics, which means that you had
better do the right things in your implementations
of these functions, right? So the way that we write this is we say, ForwardIterator<int*>. This is a way of saying, does int* satisfy the requirements? You can think of this as
basically like a function call, compile-time function call. Gives you yes or no, I'm
not gonna say true or false, but it gives you yes or no. Yes, int is a model of a forward iterator, so it satisfies the requirements,
we say it's a model, or that it models forward iterator, or that it is a forward iterator. If it doesn't, we just say it doesn't satisfy the requirements, it's not a forward iterator, right? So this came up a little
bit in Bjarne's talk and I wanted to hammer on it a little bit, it's important, I think. So the relationships
between types, concepts, and values is interesting. How many people here have
read Elements of Programming? I can strongly recommend that more of you should raise your hand. This is an excellent book. So a type defines a representation for a set of values and
operations on that type. That type, when you define it, when you write a class, it tells you how you're going to lay
out that value in memory, what bits are involved, if you have pointers,
how you interpret those as reachable components of your type, basically, how a linked list is a sequence or how a vector is a sequence. For example, int* is a 64-bit
integer on this computer, and, no, sorry, int* is
a 64-bit memory address, and when you increment it
it moves up by four bytes on my laptop. That's a type. A concept requires a set
of operations on values of a set of types. Now the way that Bjarne said it in his keynote this
morning, or Monday morning. Monday morning? It's been a long week. Monday morning was that a
concept specifies a set of values without any restrictions on the layout, which is more or less equivalent
to what I'm saying here. We're talking about a set of, like, if you look at the
semantic rules for a concept, we are talking about sets of values, but we don't really care that much about how they're represented. We don't care about the layout. We just know that they're
sort of these abstract values represented by like, a vector, right? Or a stack, or a sequence, or an integer. We don't care about the
layout, just the semantics. So ForwardIterator requires ++ to advance to the next element. There is a set of values that corresponds to all forward iterators. That's what we talk about
when we talk about concepts. What's a constraint? Well, a constraint is a logical expression that determines if you can use a type or value with a template. In fact, what I actually
showed you guys before, that is both asking whether int* is a forward iterator, and it's also a constraint, because we're asking if something satisfies a concept. We can combine them to create expressions using conjunction and disjunction. That's logical-and and logical-or syntax, I should have put those up there. So for example, we might ask
if int* is a forward iterator and int is TotallyOrdered. There's a nice conjunction. We can also compose, Eric reminded me of this this afternoon, we can also compose concepts
of using constraints. So we might say that a mutable iterator is one that requires a forward iterator and an output iterator, ooh,
and I left off the value type, so basically you're allowed to write to the iterator of that type
so you can modify its value. Let's see, a concept can also
constrain multiple types, these are not limited
to single-type concepts. That is really important
because most of our algorithms do not run on single types. Standard algorithms library has only a handful of algorithms that really are instantiated
over a single type. And actually, no, that's not even true, almost true, every algorithm operates on the iterators and the values underneath them, the values they point to, so there's always at least two types in there, except for advance and distance.
- And what? - And the min.
- And min? Sure, and min. A handful of exceptions. Okay, so it is absolutely worth noting that all template code is
written in terms of concepts whether you know it or not, period. If you've ever in your life
sat down to write a template, you have some expectation about what that template is going to do, how you're going to use the
types in your template, right? Otherwise you're just coding randomly and I don't think that's
a recipe for success. You can always discover what
those concepts actually are, we can actually look into our
algorithms and write them. Now I know that Bjarne's talk on Monday was all about generic
programming being, what was it? Just programming? So I think we could
probably revise this to all generic code is written
in terms of concepts, which is good. I think we can probably
do a little bit better and actually just say all code is written in terms of concepts whether it's dependent or not,
and this is actually true. For any piece of code you can look at, you should be able to
identify the abstract things that you're working with, even if you're looking at a concrete type. So like a vector evinces
a sequence of integers, that's great, we can
identify that concept. And you should be able to
go through your library and match that onto that, right? Like your concrete algorithms are just instances of templates that you haven't written yet. Don't take that to mean
everything should be a template. (laughs) I should take that back, don't. - [Man] It's too late,
they've already done it. - Oh no, not everything
needs to be a template, it's just that everything
could potentially be a template if it was worthwhile to make it so. Not all code needs to be templates. All right, so here's
our generic algorithm. We're back to min_element
with our preconditions. So what are our concepts here? Well, let's not start that way, let's actually ask this question. Oh, I forgot the red. What are our requirements? So what are the smallest pieces of syntax that we have in our language, excusing the assertions at this point? We're just gonna kinda gloss over those. So what are the smallest pieces of syntax that we have in our algorithm? Well, there's that. So we have equality. Just for Iter, the Iter type. There's move construction,
because return moves. There's copy construction. Not an assignment. There's increment. There's equality again. Notice I didn't actually bump this up; it's important that equality also covers inequality, or not-equals. That's a nice thing to have. There's dereferenceable. And we're done, right? No.
- Assignment. - There's assignment, oh,
now we're done, right? - Less than.
- Now we've missed less than. Thank you Matt. So we have this guy. Is this a requirement on Iter? No, it's actually a
requirement on the return type of dereferencing these iterators, which I guess we'll just
call decltype(*first). Sure. There we go, decltype(*first), and the requirement is that it's ordered. And I'm not minimizing the requirement, I'm not saying that it
has to be less-than-able. I want this to be like ordered. That all of these things can
be put into a line, right? Now we can ask what our concepts are. We have requirements, we
can match them to concepts. So if we had our library of concepts, we could actually look at them and see what their requirements were, and actually match these up and figure out what concept
we're looking at, right? But I'm gonna shuffle
them around a little bit to make it easier to see. So we have four concepts that
are related to something, some move construction, copy construction, copy assignment, inequality, and then we have these other two for increment and dereference, and we have another one for order. And we get that, those are the concepts. So the first ones are
involved in regularity, I know there's a talk about regular types and why we like them. I wasn't able to go to that
one, but I hope it was good. Our
increment and dereference I'm just gonna go ahead
and call ForwardIterator, we can gloss over the description of why it's not an input iterator unless somebody really wants to know. - [Man] Yes. - We have two iterators into
the algorithm at the same, or into the sequence at the same time. The memory must stay around for the duration of the algorithm's computation, so it must be a forward iterator. And then we have TotallyOrdered for decltype(*first). So those are the concepts. And I'm just gonna go
ahead and like hand-wave and say that every forward iterator is just inherently regular, we
don't have forward iterators that are weird like unique pointer, right? They're not move-only types. Input iterators get passed around, they're definitely equality-comparable, so we'll just erase that for now. So now we can ask what
our constraints are. Well, we can write it in English, right? Any template argument
used with min_element must satisfy the syntactic and semantic requirements
of ForwardIterator. Obviously we have to be able to check the syntactic requirements and assume that the
semantic requirements hold, which is to say that it's regular, it's incrementable, it's dereferenceable. Those all have to hold. The iterator's value type, the decltype of *first is usually called the value type, must also satisfy the
requirements of TotallyOrdered, which means it has equality
and ordering operators, and yes, you do want equality
operators for orderings. So we have our constraints here, and this is the first
slide, what number is this? 72, the first time you ever see anything from the Concepts TS in the talk comes in at slide 72, fantastic. So this is our constraint. So this is a requires clause. It expresses as an expression what is actually required to be satisfied for our template, excuse me. I'm leaving off the decltype(*first) because *first isn't in
scope when we write this. Really what we're gonna do is just, again, hand-wave and
say there's a type trait that we can use to grab that out for pointers and iterator classes, and that's actually going
to be called value_type_t in the standard, so I'm leaning
into it and that's good. (audience member speaking faintly) Oh, you. (audience laughs) (audience member speaking faintly) Apparently I should modify this and all forthcoming
slides to Iter_value_t. (sighs) All right. I gotta spend more time in
the library, I guess. (laughs) Sorry, committee joke. These are our constraints, right? Okay, so constraints, they really serve to act as
preconditions on instantiation. That's not quite right, though, they act as preconditions on use of a declaration. You can't use a constrained template unless your arguments
satisfy that constraint. It's not just instantiation. It's not really a
precondition to instantiation, which I've said before,
it's precondition to use. And I just said you can only use it if your constraints are satisfied. So if I try to call the algorithm with like a pair of integers,
which would satisfy deduction, like template argument deduction will happily deduce Iter to be int. They will totally fail these requirements when they're checked, when you call the function, or when you try to take its address. You don't have to call
it or instantiate it, you can just take its address and you'll get the same
thing, because it turns out that an int is not a forward iterator. You can't dereference it. It satisfies almost everything
else, but you can't, actually everything else,
but you can't dereference it, nor does it have a value type, all right? Now there is a convention,
by the way, that we have. If we have constraints on
single types like the first one, ForwardIterator of Iter, we
like to actually write that in a way that expresses
that more directly. I actually don't wanna say that Iter is a type min introduces, I wanna say Iter is a forward iterator. And so for single-type concepts, we can lift that first constraint into the template parameter
list and declare it this way. Now the TS... I'm not sure it's even
worth talking about, I think there were other talks that actually got into what,
like certain issues with that. I'm gonna skip that. So ForwardIterator
declares a type parameter that applies a constraint, like this is exactly the
same thing as writing this, it's just different notation. Although in the current or future version of C++, they're not
equivalent declarations. If you write this and then you write this, those technically declare
different functions and you'll get ambiguous
overloads and it's gonna be messy, so pick one strategy and write for it. The reason I actually like this is that every template parameter
gets an initial concept, you know, where every template parameter has its strongest concept listed first. This convention actually came out of the Palo Alto meeting from 2011. 12? 12, right? We thought about it and realized like, yeah, we're going to be restating some constraints sometimes, but that's okay. It's more important to actually see what something is explicitly, rather than burying it in a sequence of requirements later on. So we have a tendency to
lift these constraints. No naked typenames. I like that. Meaning that like it
might almost be a smell, like you've left something off, you're hiding information
in a requires clause. And I wanna talk about this one point too, because I think that
this sometimes gets lost. So TotallyOrdered requires, effectively for all A and B that A less than B is syntactically valid. It does not mean that every A
and B must be TotallyOrdered. This is not a total definition. We're not requiring every
single operation in a concept to be defined for every
single value of a type. Preconditions are real, right? Even if we had a good concept
that did for addition, like a monoid, right? Like we expect ints to
be additive monoids, we can add ints, they're associative, they're commutative, this is fantastic. But they might overflow, and so even in an algorithm that requires, for example, additivity of integers, you still have the obligation to provide preconditions on that. Concepts do not hide
preconditions, period. Syntactic operations are
not total operations. Floating-point types are TotallyOrdered even though NaN exists, period. This has been a
surprisingly long discussion about this one point. Well, NaNs exist, so how
can float be TotallyOrdered? Because there's a
precondition on algorithms that use the operation. That's the short answer of it. Random access iterators
are TotallyOrdered, even though the relation
is really sparse, right? Fixed-precision integers are arithmetic, even in the presence
of undefined behavior. That's just a precondition, make sure you don't invoke
the undefined behavior, it's not the responsibility of the concept to tell you what you cannot do. It is a responsibility
of you as a programmer to check your work. I should be able to use signed integers, like if I have a concept called Natural, I should be able to use signed integers as an operation for that,
as long as I make sure that the values I provide to
the algorithm aren't negative. It should still work. I can define a concept
for directed acyclic graph and implement an adjacency list, and use the adjacency list with
that DAG algorithm. That should work. I just better be on my
best behavior and make sure that the adjacency list doesn't
have a cycle in it, right? That's my responsibility. It's not the concept's, it's not the algorithm's, it's mine. Well, it's the responsibility
of the algorithm to appropriately document
the preconditions, but satisfying them is my
fault, my responsibility. Constraints identify abstractions in generic code or in templates,
or in code in general. They are not a restatement of syntax. If you find yourself writing what, thinking what are the
constraints of this algorithm and you end up with,
well, it's what I wrote, you're doing it wrong. Go try to find concepts that reasonably describe
those abstractions. You might have to work
really hard to find them. Finding good concepts is work, right? Like the STL, the
Standard Template Library,
was a 20-year journey, and it's still not finished. And then I unfortunately
heard this morning's discussion about requires(auto). (audience laughs) No. (audience laughs) No. This is insane. So the idea being that the
requirements of the template are exactly what the template says. This is the easiest way to create the most brittle library you will ever see published, period. I was trying to find an
appropriate meme for this before the talk, but I, I just, I don't think I could
really express my anger at this proposal, this suggestion. I couldn't find the right meme. Like winter is coming? No. This is fine? Burn your house down? (audience member speaking faintly) Yep, I thought about it. (laughs) So, in the process of constraining templates we can do one of two
things, we can overconstrain and we can underconstrain. Like the idea is hopefully
to find a just right. I think that you should err on the side of overconstraining templates. The idea that you start
with strong abstractions, you have reasonable requirements, they have strong semantics. Like forward iterators are
great, I love forward iterators. Input iterators are a little funky. They're pretty stable these days but they can be a little
bit weird sometimes. Move iterators are really weird, right? So if you start with strong
concepts in your algorithms, if you map your requirements onto strong concepts, that
will give you greater freedom as an implementer to
make different choices in the body of your function. Declare a local variable. Oh no, local variables, terrible. It also helps guard against
accidental misuse, right? Like the stronger constraints you have, the less weird things that
you allow into your template. Because as soon as you ship a
template and somebody uses it, they will find some way
past your constraints and depend on weird, broken behavior that you've allowed to happen. So lock it down early on and then generalize later as you go. That's kinda my approach to this. Like I'd be totally happy if we sort of didn't have input iterators and then thought about
it more slowly over time. But fortunately, Alex did that for us, we don't have to worry about it. I strongly recommend that you
avoid overgeneralization and premature generalization. Just because you have a
good idea for a template doesn't mean you have
to accept more things. Start strong, keep your
constraints fairly tight initially. Weaker semantics make
it much harder to reason about the behavior of the algorithm. They make it harder to
prove that it's right, because now you have to have proof for a broader class of things, it might have varied
behavior inside, right? And the weaker your semantics are, and this is kind of an
odd thing to point out, the weaker your semantics are,
the more constrained you are as an implementer of the algorithm. You have fewer choices you can make about the way that you wanna
implement an algorithm, the way that you wanna do something. And I say morally because
concepts do not impose
you as the implementer. You can write whatever
constraints you want and do whatever you want in
the definition, doesn't matter. You're free to do whatever you want. Put some couts in there, it's fine. But if you wanna make sure that every time somebody
uses your template you actually instantiate code, then you are effectively constrained by the implementation, right? You can only work within the set of
requirements of the template. By the way, the fact that we have regular as a requirement for
forward iterators means that we can basically
create arrays of iterators if we need to cache positions. We can copy them, we can move them around, we can put them into vectors and then call unique if we want to. So we have a lot of freedom in the things that we can do just because
they happen to be regular types. As soon as we make our
requirements non-regular, we can't do any of that
stuff, like if it's move-only, we narrow the set of operations that we can apply to it. At the extreme of all this, if you overly aggressively generalize, you'll end up finding yourself writing syntax-only concepts or syntax-only constraints
like has plus, has minus. Those are semantics-free concepts. It's really hard to assign
meaning to something that just says you can write A plus B. Okay, well what does that mean? Well, no, literally it
just means you can write it and it computes a value. It is a semantics-free concept. It doesn't mean anything, and
they don't compose very well. Okay, so part two of the talk. Part 1 1/2 of the talk, doing good. All right, so what can we constrain? This will go faster. Well, we can constrain
function templates, we've already seen that, this is good. Oh, did I actually do that? I put InputIterator on that one, cool. That's wrong. We can constrain class
templates, here's a vector. We constrain with object type and our allocator is AllocatorOf<T>. That AllocatorOf<T> thing
basically just means that I want to say that Alloc, the type Alloc is an allocator with T as the second parameter. There's just a little
rewrite that we apply. It means this. That's exactly what it means. Forgot I had that slide, good stuff. So if we use a constrained class template, we can write v1 of int because int is an object type; an int ref is not, so we get an error. And I wanted to point
out the incomplete case just declaring a pointer to a type that would fail the concept. We don't require anything
but the declaration, you just can't use the type because you fail the constraints,
that is also an error. And when I say it's an error, I mean this is what you get from GCC. That's it. I wish it were better. It's still better, it's three
lines, four lines, right? I just wish I could say... I wish I could say more than
evaluated to false at the end. I would like that to
say not an object type, that's all I want. You can constrain variable templates, if you need to. (laughs) You can constrain alias
templates, here's my PMR vector. So the object type and we just put the
polymorphic_allocator in there, yes? (audience member speaking faintly) No, I don't believe, oh, so can you specialize a type alias? And the answer is no, I don't believe we will ever
have type alias specialization. You can constrain member templates. So here's a compare constructor. Again I'm using a little partial concept ID trick, convertible. So X is convertible to
T, Y is convertible to U. This works nicely, so you
get an okay in the first one, failure in the second because you can't convert nullptr to an int. And I think this is my
favorite feature of concepts, and if you've ever
written a class template I think you'd agree, especially if it's a library template. You can constrain non-template members of class templates. So in this case, the requires clause goes
after the declarator because it looks really
odd if it goes before, and you run into some
weird parsing ambiguities, so we put it after. So pair requires your first and second, your T and U to be defaultable. Default constructible is too
long for the slide, clearly. Your copy constructor requires
T and U to be copyable, these are good concepts to have, and that will just work
the way you expect it to. If you provide types
that are not copyable, like a unique pointer, you
can default construct it, you can't copy it, you
get the error out of that. And that's it, those are the
things you can constrain, that's what concepts give us. Okay, so, we talked
about generalizing code and we talked about what you constrain. How do we get fast code? Well, templates help, this really isn't the
responsibility of concepts, this is mostly on the back of templates. They generate concrete
versions of our code. You instantiate a template,
you get concrete code, it goes through the optimizer the same way as if you'd written it by hand, that gives us really fast
generic libraries, period. But one of the things
we can do with templates is actually specialize our algorithms. There's three ways we can
think about doing this. So we can think about
value-based specialization, and this is really just pattern-matching; there's a runtime version of this too, not really appropriate in this talk, but I wanted to throw it
out because it's a thing. Look at your data, pick a better algorithm
based on your analysis. Type-based optimization
lets us provide overloads that customize behavior based on the pattern of a type. So if we have an optimal representation, or if we have an optimal
implementation strategy, we can kind of hook into the
overload resolution mechanism and select that as we need it. And I almost gave the vector bool example, but I was like, I cannot give vector bool as a good example of this. It's just too weird. And then we have
constraint-based optimization, which is new in concepts. And this lets us actually
overload algorithms for operations with stronger or additional requirements, right? So here's my example of
type-based specialization. It's a bit hacky, I was trying to rewrite
all of this this morning. So if I wanna sort a
random access container, I might have a random access container, apparently called vec. Guess what that is? And you just call a standard
sort with begin and end. But if we wanna sort a list, like we can carve out a
specialization for list. You might actually find this to be more performant than the link-based rearrangement on the list, right? You copy it into a vector,
you sort the vector, then you copy it back out. Even though you have the extra copies, it might actually give
you better performance than sorting normally. So this is a type-based specialization. But you should measure
this before you deploy it. I don't know if it's actually better. Concept or constraint-based
specialization is predicated on this notion of concept refinement. So really we say a concept C
refines some other concept D if whenever C is satisfied,
D is also satisfied. We can also say like, C strictly refines D if it refines D, but not vice versa. It's a nice thing to have. Really we're saying that C has
the same requirements as D, but a little bit more. So we require more operations,
we require stronger semantics like a ForwardIterator
is an input iterator, but we have certain extra guarantees. A BidirectionalIterator
refines a forward iterator, but we have additional operations. Random access is bidirectional,
so forth and so on. The thing is that those extra requirements give us hooks to exploit
additional properties for optimum behavior. They let us do more things with them, they expose more structure that we can rely on for optimization. Which I think I just said. So a concept requires a set of operations on values of a set of types. Should say of a set of types. A refined concept provides
more properties we can rely on, so it lets us do more things,
we can build more algorithms based on those additional requirements, or we can exploit their structure
for optimum performance. Unfortunately, we don't actually
compare concepts directly, we actually compare constraints, and this is where this
notion called subsumption comes in. So if you have two constraints, we really just say that one constraint P subsumes another constraint Q if you can prove that
P logically implies Q, and that implies really hooks into this refinement argument too. So yes, GCC has a small
theorem prover in it, this is how you do function overloading. So this is the mechanism, we actually use this for
computing refinement also, and the idea is that the
compiler will select overloads based on the most constrained declaration if all the types are equivalent. So all other things being equal, if you still have two overloads and they're both constrained, or one is constrained
and the other is not, then you'll choose the one with the most, the strongest constraints. And here's the kind of
canonical example of this you saw this morning with Bjarne, or Monday morning with Bjarne. This is the distance algorithm. We can have a slow distance
for InputIterators, or it should arguably be ForwardIterators because if you call this for an InputIterator you will consume the entire range. But it's an iterative algorithm, we can only move one step at a time. Random access iterators, they expose a lot more structure. They expose more operations, we're allowed to subtract them, they provide better complexity guarantees because we're allowed to subtract and that's a constant-time operation, which means that we have a
linear algorithm in the top, we have a constant-time
algorithm in the bottom. Unfortunately, I love
that overloading example, it's fantastic, this is a great feature. Unfortunately, for algorithms
with complex requirements, this gets really tricky. You can really easily find
yourself writing constraints that just are incomparable. Neither subsumes the other, neither logically
implies the other. And the example I'm gonna
give for that is copy. I love this algorithm,
this is a great algorithm. So the algorithm requires an InputIterator and an OutputIterator, and we write values of the InputIterator into the OutputIterator,
and that's the algorithm. And we know that we can optimize this because if we have an array
of TriviallyCopyable types, we can just use memcpy. And if you can use memcpy,
you should use memcpy. This will result in ambiguity, it'll be ambiguous in one case and not in the other, and it's surprising. So if it happens to be the case that your input is actually
an array of constant pointers, or a range of constant
pointers, it'll be ambiguous. If they're non-constant
pointers, it'll choose the first, which is weird, but it's true. I actually tested this this
morning, it surprised me. Then I spent an hour
trying to figure out why, and then I had to write these slides. So we don't really want
to deal with arrays, so I'm gonna lift this and
make it a little bit more general. We really wanna work
with ContiguousIterators. So again, a ContiguousIterator
is a random access iterator that guarantees that every object occurs in a monotonically increasing address, and then I'm gonna just wave my hand to say MemCopyable is a real concept, and that means what you think it means. Names are good. Totally ambiguous, by the way, right? It is really hard to prove
that the constraints in the top imply the constraints in
the bottom, or vice versa. And I couldn't work out
whether or not there's a bug in the compiler that prevents
this from being non-ambiguous, or whether the constraints
actually work out to be non-ambiguous, or ambiguous. But either way I couldn't make it work, and so I realized, okay,
so this is a weird case. Maybe don't rely on overloading for that. If constexpr is a lovely thing, we can still rely on this. The downside of this is that it makes the algorithm closed, right? We can't come along later
and discover new iterators and sort of extend that
overload set as we go without modifying this definition
now, but maybe that's okay because how often do we discover new categories of iterators? Oh no, I said category. New concepts for iterators. It doesn't happen very often, right? ContiguousIterator we've talked about for eight years in committee, I think? Seven years in committee,
something like that. And that's the only one that's been proposed since the original STL. I think it's the only one, maybe there's been like a few others, but for very obscure,
obscure data structures. So, use if constexpr. The main interface is the same, it's just that under the hood, if it turns out to be MemCopyable, then you use the fast algorithm, if not, use the slow algorithm, great. That's what those look
like, just rename 'em. You can also use overloading,
this specialization trick with class template specializations. This is almost what
standard less looks like, I think I got the void overload right. So the idea here is that
we have a primary template that is actually constrained
by a disjunction, which is rare, and they
don't show up very often, but they serve a good purpose. It's an interesting thing. So less, you can either use
with a TotallyOrdered type or you can use it with void, and it actually defaults to void, I kind of left that out
of the implementation. If it's void, if you use the default, then you get a member
template call operator. If you use it with an actual type, then you're constrained to be
that type that you specify. So the idea is that you can
provide a polymorphic less if you don't provide anything there. And we were kind of
thinking about the right way to actually write the constraints
for these a while ago, and my initial suggestion was, well, just leave the
primary unconstrained. Then I realized that if you do that and you pass in something that is neither TotallyOrdered nor void, then you just have this
weird incomplete type and you get this kind of unexpected error, like your type is incomplete. And really we actually wanna catch that at the point of declaration, so if you try to use a less with something that is neither TotallyOrdered nor void, then that should be the error at the point that you first use it, and so that's what this
disjunction gives us. Now if you actually do use this with a TotallyOrdered type
like logical implemented, logical implication is great, TotallyOrdered implies
TotallyOrdered or void, so that becomes a stronger concept, or stronger overload declaration. So you actually end up
selecting the right one. Okay, so that actually worked out well. I love generic libraries,
generic programming is great. I wish I had the opportunity
to do more library work. I would leave you with the idea
that you should definitely start from concrete examples and work backwards. You wanna work towards
abstraction, don't start from that. Abstraction is the destination,
not the starting point. I think that's pretty much where I was gonna end up with that. Oh, and yeah, concepts are
definitely all about semantics. We only check syntactic requirements, but I assure you that if you
get the semantics of those, if you get the behaviors wrong, if you forget the preconditions are real, then nothing is going to work out. Things just don't click together, they don't connect quite the same way. You have to pay attention to the semantics of these things. Okay. My last conclusion is clearly I should have scheduled a second talk to finish the other part of this talk, the real everything you need to know, but I think this actually worked out well. So, any questions? (audience applauds) - [Man] In your copy example you showed that it was hard
to resolve the ambiguity with the ContiguousIterator
versus the forward, was it ForwardIterator, right? - Input. - [Man] Yeah, InputIterator, right. Since the case you care
about with the fast copy, those iterators are also InputIterators, could you not have simply
restated that requirement such that that subsumed the other? - I think so. I was writing this example this morning and I didn't wanna dig through the subsumption implementation in GCC that I rewrote over the summer. It might actually be correct
the way it's written, I don't know for certain. There might be a compiler bug. - [Man] So let's say I see a function that takes some sort of concept. Oh, let's say I see a function that's taking some sort of concept, and I wanna write a type that
will satisfy the concept. As I understand currently you just have to look at
the concept definition and make sure you implement everything and then hope it compiles. Is there any way or any planned way to sort of declare or enforce
that a type definition will satisfy some concept
like impl Trait in Rust? - You can always statically assert that your type satisfies a concept. In fact, I think that's usually the thing that most people do when
they start using concepts. Does vector satisfy a sequence? Does my type satisfy this? They use static asserts initially, and yeah, you can do that, but there's nothing automatic for it. You're still gonna have to
opt into testing somehow. - [Man] Okay, thank you. - [Man] Hey, so thank you
for this talk, it's awesome. I'm a simple person, and I don't go into all this
generic stuff very often, but I do go in for the fast,
so you had me at fast earlier. One of the things that
one typically wants to do is to implement an algorithm
that isn't actually generic, but you don't want to necessarily use like a virtual-dispatch-type thing. A pattern that you use is
you templatize on the type T, and then you call methods
that you assume will be on T, like T, here's a foo, T, oh,
that's the end of the packet, T, here's the next packet kind of thing. So you earlier said you shouldn't be writing
your own concepts, but it seems like concepts
to me are a better stand-in than the thing that I currently do where I write like an empty struct and say this is what you need to have. Have you got any color on that? Is that wrong? - No. If you're working in a domain where you have sort of your
own custom abstractions, you're going to have to roll your own. I think that the reason I
say go find other people's is they tend to be so hard to write and if you're gonna publish libraries, it takes a long time to get them right. But yeah, there are definitely cases where you're gonna be
internally in your own system just generalizing something
to reduce boilerplate. You will roll your own concepts. - [Man] So I can use like,
just as a paraphrase, a concept could be the interface for an internal templatized
thing, thank you. - Yeah, and you can definitely use that to break dependencies
in compile times too. Yes? - [Man] So I have a fundamental
question about choosing the most general strongest concept. I mean, I want you like
to elaborate on that because sometimes with concepts, you can't rank them by how strong they are, there are two concepts, for example, and mathematically speaking,
like something is a group, something is the same letters, you can't really rate one above other, and you really would need to
like logically put something that the previous style to
abstraction into like naming or what it is and then just
put like the requirements as that boss like symmetrically
on what they should be. Could you like elaborate on that? Why would you need to like
put this just like one? - I'm not--
- What is this consistent way to do it? - I'm not sure I understood the question. Can you-- - [Man] Why do you need tools
to like select such as this, I'm not talking about like naked typename, but why do we need to
select the strongest concept when you wanna put it in there? One. - I'm having trouble understanding you. I can't quite hear what you're asking. Step into the mic, please. - [Man] Okay. Whenever you say what the type is, this ForwardIterator
or something like this, why would you need to select a strongest, single strongest concept
to put it in there in the first line? Because in many situation,
you do have this concept that everywhere is symmetrical, you can't really say the one
is stronger than another, and to be consistent you would need to put
it later or require it. - That was just the convention that they got out of
the Palo Alto meeting, and it seems to work quite well. I like it. There's nothing that
requires you to do that, I just don't like looking at generic code that doesn't have, that
doesn't have the constraints announced loudly where they're declared. - [Man] You also said
that you also need to work from the concrete to abstract, not from abstract to concrete. Sometimes, I mean, working from
abstract to the concrete is maybe easier in some cases when you're writing like
really general stuff, are you going from the series then you're trying to
generalize algorithm? What's really like the main-- - I think the ability to work
from abstract to concrete is something that is really developed with practice and experience. I don't recommend like starting from that, but sometimes if you really
have a very crisp view of what you want, you're
probably gonna do it anyways. Like I know that I can
usually write a template and get it mostly right
in a couple of tries, but if you're building a
library from the ground up, then you definitely want to start with better examples, like more examples, and then generalize from there. All right, back over here. - [Man] You had slides that
you said one of the constraints could be the complexity of operations, and I know that we can't really discover what's the complexity of an operation, but we could have tags or something that indicates
that that implementation is O(N) or O(N^2), or... Is that something that is far off? - Maybe. I would really love to see
somebody try to do that using a trait system or type traits to actually approximate
algorithmic complexity and see how they compose. That's interesting. I think it would be, I
don't know how likely that would become like a mainstream tool, especially for the standard library. It would be... I would love to see somebody do that. That would be fun. - Okay.
- Yeah? - [Man] Is there any chance
you could scroll back to the last example with less on it? - That one? - [Man] Yes. My understanding is that
the less void specialization actually takes two different types so that two different
types can be comparable, and I'm curious-- - I agree, I was looking at
cppreference this morning and couldn't find exactly where that was, so I just wrote it with one. But I think you're right, I think it actually does take two types. - [Man] But that would
be expressible though? - Yes.
- Okay. That's what I was curious--
- I don't remember the name of the concept that we use for two-type TotallyOrdered anymore. TotallyOrdered with? TotallyOrdered with. - Okay, thank you.
- Yes. - [Man] I was just curious what the implications would
become regarding SFINAE and how that was to
play out with concepts, and I understand that the CFINAE
really wouldn't be a thing because it would just eventually be SFINAE for that particular concept. But if I were to get an error and I wanted to achieve a
certain concept, and I missed it, it seems like the errors that I would get would continue to roll up
until we were out of options, then I would get the most
generic error at the time. Is that the way that would play out? - You--
- Failed to satisfy all of the concepts. - I'm finding an example here. So I didn't... Give me something, let's
go back and look at the... Go back and look at this
one because it's explicit. So the way that we
actually check constraints is not through normal template
substitution or evaluation. Constraint satisfaction
actually interleaves template substitution with evaluation. So you substitute into the first, the left-hand side of a conjunction, and if that fails, you stop. So we actually short-circuit
template substitution over logical operators. The SFINAE stuff you
have to work pretty hard to get that to prevent
errors from propagating all the way across expressions, so we do some pretty nasty stuff. But this is designed to prevent you from having to worry about it at all. - [Man] Sure, I know that
you had picked in some cases template typename Iter
versus template concept, and I believe that would
provide the flexibility there. A second question if I may? In the event that I wrote my own concept, I don't know where they land
within namespace regard. If I were to write my
own in a static library and one of my co-workers
was to write a concept in a different static library,
and we meet at link time, which sort of error would I
get for an ambiguous concept? - Repeat that one more time. - [Man] I'm just curious
if I define a concept and one of my co-workers were to also define a concept
elsewhere and they meet, I will happily accept the
answer, don't do that, and that's fine with me,
but I was just curious if there was any expectation
for how that would behave, or if I would just get a, oops, you accidentally pass satisfaction but then got the other. - So the short answer is don't do that. - [Man] I can come back later so-- - Concepts are top-down, yeah. - [Man] Thank you. - Yes? - [Man] A while ago I wrote
my own concepts library, and the concept for iterators, well, the concepts for
iterators that I came up with took a second argument, convertible to this, because this pattern is literally the most frequent pattern that I found. I mean, if I didn't know, I put in void and it did this. Given that the choice for the
standard library is different, I'm wondering what I was thinking wrong, because I know Eric is a
very smart guy. (laughs) - So what was the final question, again? - [Man] Well the final question is, why was this choice made? Because I want to think better. - The choice was made during
the Palo Alto meeting, where we sat down with Alex Stepanov. I was there, Bjarne was there,
a lot of people were there. Sat around a table, went
through every single algorithm in the standard library,
looked at the syntax, just like what I did in the slides, and pulled out concepts from that. We actually designed the concepts
around those constraints. We didn't start with anything in our mind, we actually derived the
concepts from the algorithms. And then the Ranges TS
kind of picked that up and used that as a basis, and then got reviewed for
several years by the committee and made some changes,
but not, I don't think, fundamentally different changes. And so that's what we ended up with. Are there opportunities
to do other things? Sure, I know that Dietmar Kuhl
has some very strong opinions about the right way that
iterator should work, and they would be kind of incompatible with the standard library also. Everybody kinda has a different view of what their kind of core concepts are, but we get standard concepts. - [Man] Thank you. - [Man] Arthur mentioned
something this morning, and please correct me if I'm wrong, that one difference between the TS and what got into the working paper is that if you have one concept
that's A and B and C, and you have a different
one that's C and B and A, that the TS would know
that those were equivalent, but that the working paper says
that those are different? - No.
- That's not true? - No, if A, B, and C are all concepts, then they'll be the same. Same outputs. - [Man] So somehow you boil it down to understand the
equivalency between them? - Yeah, basically it's like
concepts are kind of transparent in this analysis, so you
just sort of explode them out into all their smallest parts,
their atomic constraints. Basically, subsumption works
on like an equivalent search, and so if you happen to find
a subset of one in something, then that's satisfied. - [Man] So if A checks A prime, let's say, and you have A and B and
C, and A prime and B and C, would those boil down? He had some example with a type trait, and a concept that just checked that single type trait, not being equivalent. Is that the same thing or different? - Yeah, that's a little bit different, because type traits aren't concepts, they're atomic constraints, and atomic constraints
are never equivalent in that kind of way,
they're only ever identical based on what template they're defined in, where they appear in
the program essentially. - [Man] Okay, so try
to use mostly concepts and not atomic constraints maybe? - Yeah, and this only really matters if you're overloading with them. If you're not overloading with them, then use whatever you want. - [Man] Overloading is one of
my favorite things about 'em, so I'm excited about
that use case, thank you. - Use concepts, yes. I'm gonna do this one. - [Man] Earlier you showed using vector with a reference to int gives an error, and you didn't like the way
the error was displayed, and I'm thinking in that case, the concept acts somewhat
like a static assert, so would it be possible to
have an optional message that we give with each
item in the concept? - Yes, and I think what I would really like to do is have an attribute on the concept that emits the message, or
a format of the message for what happens when you fail it. I want attributes that let you do that. - [Man] That sounds good, thanks. - All right. - [Man] Andrew, I do have a
question for you, actually. - Good.
- And that is, you mentioned that in a requires clause we can combine concepts using
disjunction and conjunction. You did not mention negation, and I'm guessing that
you can't use negation, and I can think of various
reasons why that might be. Am I right about that, and is it basically because it was too hard to implement and so they wanted to
push it off to the person to write a constraint
that sort of takes it in, or is there some deeper reason? - So the constraint language, the way that we partially order overloads, is based on a very, very
simple propositional calculus. If we add negation to that,
I believe the solver for that is effectively equivalent to Prolog. Is that more or less right? Something. - It gets very complex.
- It's an order of magnitude more complicated.
- Okay, all right. - But you can't use negation,
you can write it there, it's just that it turns out
to be an atomic constraint, and so it's not comparable
with other concepts. - [Man] I see, okay. - In fact, I had slides
where I talked about negation and I took them out
because I already had 115. - [Man] Fair enough, thank you. - Okay. The session is over, unfortunately, so I'm being flagged down. I believe that you had an announcement? - [Man] Yeah, I had an
obnoxious request for Andrew, which he was kind enough to accede to, and that is I just wanna
draw everyone's attention to a newly added Birds
of a Feather session at 8:00 a.m. tomorrow morning. The topic is such that in a
gathering of generic programmers maybe it won't be of any interest, but maybe it will for some of you. It's on runtime polymorphism
and sort of how to deliver it from the shackles of object
orientation and inheritance. Anyway, that's all. Since it was just added to the
schedule an hour or two ago, I'm trying to publicize it. Thank you, Andrew, and really
thank you for a great talk. This was very clarifying. - Thank you. (audience applauding)