Operator Laws - Check

Reference

A = \{ x | property \}

  • \mathbb{N} is the set of natural numbers (the non-negative integers: zero and the positive whole numbers).

  • \mathbb{Z} is the set of integers (including the negative ones).

  • \cup is the union operator symbol.

  • \cap is the intersection operator symbol.

  • \setminus is the set difference symbol.

  • \in is the "belongs to set" symbol.


  • A statement "A" can only be True or False

  • \lor is the logical "or"

  • \land is the logical "and"

  • \lnot A indicates the negation of "A"

  • \exists x(\text{statement}) indicates that the "x" present in the statement is a variable that makes the statement True for "at least" one value of x.

  • \forall x(\text{statement}) indicates that the "x" present in the statement is a variable that makes the statement True for "all possible" values of x.

  • A \iff B is an "if and only if", making A and B have the same truth value.

Here's a recap of some properties:

  • Commutativity: a \circ b = b \circ a

  • Associativity: x \circ (y \circ z) = (x \circ y) \circ z = x \circ y \circ z

  • Identity element e: \exists e \forall x (x \circ e = e \circ x = x)

  • Zero element e: \exists e \forall x (x \circ e = e \circ x = e)

  • Distributive: x \circ (y \Box z) = (x \circ y) \Box (x \circ z)

  • Inverse operation: x \circ y = z \iff z \Box y = x (where \Box is the operation inverse to \circ)

Operator check

We've seen some of the laws and properties that can be expressed in mathematical notation. Most of them involve rewriting an expression, or invoking some element with specific properties in relation to the operation described.

Most of these properties are either taken as assumptions in our mathematical system, or proven. The assumptions we make form the basis of our own mathematical understanding and system. It's up to us to make sure that these assumptions make sense, so that the house of cards we're building doesn't metaphorically collapse on us. These assumptions are called axioms.

If, for example, we took "1=2" as an assumption, then a lot of operations, and their related properties, would break entirely. Taking this into account, we could say things like "x*1=x*2" no matter what number we substitute for x. Solving that equation gives "x=0" for every x, basically implying that our structure of numbers has fallen apart, and all numbers are now the same quantity.

A lot of the laws presented here are either demonstrated (in better places than this one) through rigorous mathematical logic, or assumed via axioms. Since we need to finally learn how to manipulate equations, we're going to try and "prove informally" as much as we can.

Addition

Law of commutativity

Specializing the law of commutativity towards addition gives us (implicitly, for every number x and y):

x + y = y + x

It's important to note that "x" and "y" here can be freely substituted with whole subexpressions. So our law could also mean things like:

x + y*z = y*z + x

or

x + (y+z) = (z+y) + x

Intuitively, we can convert our group-of-xs notation back into what it really was all along: a set with all different/unique elements. If we had set A = \{a,b,c\}, we denote |A| = 3, that is, the "|...|" notation indicates the cardinality of the set. If we had set A declared as before, and set B = \{d,e,f\}, then the cardinality of the union is |A \cup B| = |A| + |B| (this only holds if all the elements involved are different, why is that?). And therefore, the number of elements in a union of sets can be equated to the sum of the cardinalities (set sizes) of the smaller sets. Since that union does not care whether we swap the sets around, we can (intuitively for now) deduce that the same holds for addition.

The above paragraph is a little bit daunting, but it's there to show how set notation can support our normal operations, proving these laws. Some of these laws feel so "banal", and yet it can be very difficult to properly prove that these "fundamental" things actually work. We usually have to take most of these laws as "self-evident" axioms.
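
If you'd like to see the cardinality argument with actual sets, here's a quick sketch in Python (the element names and sets are just made up for this illustration):

```python
# Disjoint sets: no element appears in both.
A = {"a", "b", "c"}
B = {"d", "e", "f"}

# For disjoint sets, the size of the union equals the sum of the sizes.
assert len(A | B) == len(A) + len(B)

# The union does not care about order, mirroring commutativity of addition.
assert A | B == B | A

# If the sets overlap, shared elements are only counted once,
# which is why the argument requires all elements to be different.
C = {"c", "d"}
assert len(A | C) < len(A) + len(C)
```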

If you don't like sets, the "s(s(s(s(zero))))" notation used in computer science works too, and it can be proven by induction (a logical method for proving statements about infinitely many cases, which we'll revisit later) that commutativity of addition works for any amount of "s(s(s....".
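
Here's a minimal sketch of that idea, using nested tuples to stand in for "s(...)". The names `zero`, `s`, `add`, and the range of values checked are just choices made for this illustration; the brute-force loop is of course not the induction proof itself:

```python
# Peano-style naturals: zero is (), and s(n) wraps n in one more tuple layer.
zero = ()
def s(n):
    return (n,)

def add(a, b):
    # Defined by recursion on the first argument:
    #   zero + b = b
    #   s(a) + b = s(a + b)
    if a == zero:
        return b
    return s(add(a[0], b))

def from_int(k):
    n = zero
    for _ in range(k):
        n = s(n)
    return n

# Brute-force check of commutativity on small values (a real proof uses induction).
for x in range(10):
    for y in range(10):
        assert add(from_int(x), from_int(y)) == add(from_int(y), from_int(x))
```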

Law of associativity

x + (y + z) = (x + y) + z = x + y + z

In the rest of these, I am not going to prove too many things, because it would be exhausting, and ironically, would require more advanced math again. The entire point here is to get a feel for these laws, and how we can invoke them mechanically, so that we can be sure that we're not manipulating the expressions wrong.

Associativity allows us to care less about how we do operations. We can ignore the grouping an expression gives us, as long as the operation involved is the same. For example:

x + (y + (z + a)) = x + y + z + a = (x + y) + (z + a)

But if we were to break the grouping across different operators....

x + (y * z) \neq (x + y) * z

The \neq symbol stands for "not equal". While there could be a combination of values that makes the two sides equal (say, x=y=z=0), in general the above exchange is not allowed, because it can change the result of the expression entirely.

3 + (4*5) = (3+4)*5
23 = 35

And the above equation is clearly "False". So we know that associativity needs to use the same symbol. In our previous diagram, it means that we can swap around the levels of the tree, as long as both nodes are "+". As pictures are worth a thousand words....

Imagining "a" "b" and "c" to be any subexpressions, the above tree is (a+b)+c(a+b)+c and then...

It is now a+(b+c)a+(b+c).

It seems now that the "first" operation has swapped, and the direction of the arrow from "+" to "+" has inverted. This shows how associativity helps reduce a bunch of expressions into a much more concise notation, without having to determine the exact order of parentheses. This is especially relieving on much longer repeated additions, which can now be expressed as 3+4+5+6+7+8+9+10 rather than ((((3+4)+(5+6))+(7+8))+9)+10 (one possible order, which would be strictly required if associativity didn't hold!).

With associativity and commutativity back to back, we can now freely swap operations both in the order of the operands, and in the priority of which operation to reduce first.
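
As a sketch of that freedom, we can check, for one concrete list of numbers chosen arbitrarily, that every ordering and two very different groupings of the same addition agree (the helper names are just picked for this illustration):

```python
from itertools import permutations

terms = [3, 4, 5, 6, 7, 8, 9, 10]

def left_grouped(xs):
    # ((..(x1 + x2) + x3) + ...) + xn
    total = xs[0]
    for x in xs[1:]:
        total = total + x
    return total

def right_grouped(xs):
    # x1 + (x2 + (... + xn))
    if len(xs) == 1:
        return xs[0]
    return xs[0] + right_grouped(xs[1:])

expected = left_grouped(terms)

# Commutativity: any order of the terms gives the same sum.
assert all(left_grouped(list(p)) == expected for p in permutations(terms))

# Associativity: grouping from the left or from the right does not matter.
assert right_grouped(terms) == expected
```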

Identity Element

So, there is an element that fulfills the property:

\exists e \forall x(x+e = e+x = x)

If we take the above equation, x+e = x, we know that subtracting the same quantity from both sides keeps the equation valid, so x+e-x = x-x becomes e = 0. Since we now have a possible value that works for any x (as we never picked any value for it), zero is our "identity element". That means that adding 0 is always a "free" operation. You can add zero as many times as you want, in any order, without changing the actual value.

Why is this important? Since we said we can add zero as many times as we want, if we have anything that equals 0 in some form, we can now add it in, hoping to further simplify the result. (With just plus and minus, the examples aren't too stellar, as everything can also be done with techniques that add or subtract the same quantity from both sides of the equation. When we mix in multiplication, then some interesting manipulations can start happening.)

By the way, in the identity property, why did we have to specify both "e+x" and "x+e"? We only substituted the \circ with + here. Since the property above is written with a general operator in mind, it might work even when "commutativity" doesn't.

Property of zero... ?

\exists e \forall x (x + e = e + x = e)

This property does not apply. Without confusing ideas here, the point is simple: we are searching for a number that, when added to any other, reduces the result to itself. Pick "4" as a candidate, for example: we would need x+4 = 4 for every x, but the only x satisfying x+4 = 4 is "0" (0+4=4, basically), so "4" does not work for every other number. Every other candidate fails the same way, and with a lot of work and inductive proofs we can finally show that this "e" does not exist.
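
As a brute-force illustration (emphatically not a proof), we can search a small range of integers for candidates: the identity-element search finds only 0, while the "zero element" search finds nothing. The range is an arbitrary choice made for this sketch:

```python
candidates = range(-20, 21)
values = range(-20, 21)

# Identity element: x + e == x for every x. Only e = 0 survives.
identities = [e for e in candidates
              if all(x + e == x and e + x == x for x in values)]
print(identities)  # [0]

# "Zero element": x + e == e for every x. Nothing survives.
absorbers = [e for e in candidates
             if all(x + e == e and e + x == e for x in values)]
print(absorbers)   # []
```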

Inverse operation

Subtraction (-) is the inverse operation to addition.

x+y = z \iff z - y = x

There's not much to demonstrate here; it's a straightforward declaration, essentially what subtraction is defined as.

Subtraction

Now that we've checked addition, we can speed through subtraction. Remember, in the formulas with no quantifiers (\forall, \exists), it is implied that every variable ranges over every possible value. Therefore, if the property doesn't hold for some combination of values (ergo, we found a counterexample), we can say the property does not apply. (It's always easier to find a counterexample and be done, rather than having to methodically prove that every single value works out.... maybe.)

Associativity

x-(y-z) = (x-y)-z

Take x=1, y=2, z=3, substitute.

1-(2-3) = (1-2)-3
1 - (-1) = (-1) - 3
1 + 1 = -4
2 = -4

Does not hold, throw it out.

Commutativity

x-y = y-x

x=1, y=2, substitute.

1-2 = 2-1
-1 = 1

Absolutely not, therefore it is thrown out. If you really want to reorder (-) operations, you can always transform them into (+), with a (-) applied to the entire subterm.

a-b = a+(-b)
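
A quick counterexample hunt in the same spirit, plus a spot-check of the a-b = a+(-b) rewrite just shown; the ranges are arbitrary and this is only an illustration, not a proof:

```python
from itertools import product

values = range(-5, 6)

# Commutativity of subtraction fails as soon as x != y.
comm_fails = [(x, y) for x, y in product(values, repeat=2) if x - y != y - x]
print(len(comm_fails) > 0)  # True: plenty of counterexamples

# Associativity of subtraction fails too (whenever z != 0).
assoc_fails = [(x, y, z) for x, y, z in product(values, repeat=3)
               if x - (y - z) != (x - y) - z]
print(len(assoc_fails) > 0)  # True

# But rewriting subtraction as adding a negated term always agrees.
assert all(a - b == a + (-b) for a, b in product(values, repeat=2))
```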

Identity Element

x-e = e-x = x

I almost had faith, as "x-0" clearly is "x" again. And then "0-x" happens and destroys my hopes. That is probably why commutativity is not taken for granted in the identity element.

Distributive (over addition?)

First of all, we need to fully explain something. The - symbol is actually used in two different ways. One is to indicate subtraction (a binary operation), so that 5-3 is simply taking 5 and removing 3 from it (counting down three times). The other is when the - stands alone with a term (a unary operation), as in -5. This minus symbol acts on the whole term it sees (with the same priority as addition/subtraction, though even this is actually contested, which is why extra parentheses are needed to properly define the intended priority). So in

-(3+4)

the negative is acting on the entire parenthesized expression, while in

-3+4

it is only acting on the 3.

x-(y+z) = (x-y)+(x-z)

There is so much wrong already, but it's a good exercise to practice. We also have an extra manipulation to do, which we are going to explain separately.

Assumption (explained below):

-(y+z) = -y-z

Weirdly enough, this is kind of difficult to prove with our current understanding of mathematics. The most naive way to think about it is in the "group of xs" notation: think of what it means to count up y and then z times, then substitute all of those "s(s(s(..." with the (vaguely mentioned in the previous article) "n(n(n(..." notation, and see that the number of "n" is going to be equal to the number of "s", and that this is the same as counting down "y" times and then "z" times.

Anyway, let's try to dissect the above operation, now that we have a bit more at our disposal.

x-(y+z) = (x-y)+(x-z)
x-y-z = x-y+x-z
x = x+x
x = 0

Aaaand halt. We assumed this should have worked for any value of "x", but our equation firmly asserts that it only works when "x" is zero. The fun part is that the above equation actually does translate to

0-(y+z) = (0-y)+(0-z)
-(y+z) = -y+(-z)
-(y+z) = -y-z

And this is exactly the extra manipulation we had to learn. Sadly, since the distributive claim doesn't hold for every value of x, we need to throw it away, but we keep the lesson: we can apply this "negativity" to every term inside the subexpression separated by a "+".

And here comes a bit more of a difference. This "unary" minus, -x, is different from the "binary" x-y. (In the expression -y-z, the first "-" is unary, only affecting y, while the second is the usual subtraction.) Trying again on the manipulation we assumed:

-(y+z) = -y-z
-(y+z) = (-y)-z

Let's add the quantity (y+z) to both sides:

(-(y+z))+(y+z) = ((-y)-z)+(y+z)

I'm using an abundance of parentheses to show exactly how these operations are added and manipulated. Let's invoke two laws first: we know that "x-x=0" (for any complexity of the x subexpression), and commutativity.

(y+z)+(-(y+z)) = ((-y)-z)+(y+z)
(y+z)-(y+z) = ((-y)-z)+(y+z)
0 = ((-y)-z)+(y+z)

Now we can invoke commutativity and associativity: (x+y)+z = x + y + z.

0 = (y+z)+((0-y)-z)
0 = y + z + (-y) - z
0 = y + z + (-y) + (-z)
0 = y + (-y) + z + (-z)
0 = y - y + z - z
0 = 0 + z - z
0 = 0

A little bit weird, but we have shown that our "manipulation" assumption was technically satisfied by thinking of subtraction as addition all along, just with negative particles. And to be fair, subtraction really is that easy to swap around, as long as, in "a-b", the "-b" particle stays added somewhere to the "a" subexpression. "-b+a" is also the same expression. So in a way, our "subtraction" is kind of distributive towards addition, but only if we already express it as an addition with these negative particles.

I am sure this whole paragraph might have been more confusing than anything, but following the laws for addition makes a lot of "substep" equations basically proven (like the one above), without having to rewrite our equation infinitely many times just because we wanted to write "b+a" instead of "a+b".

Try to prove, or just manipulate around to see if they hold (they might not!):

a+b+c = c+(a+b)
a-b+c = (a+c)-b
-b+c = -(b+c)
-a-b-c = -(a+b+c)
-b+a = -a+b

(So much for "we can speed through subtraction"...)

Multiplication

Commutativity

x*y = y*x

The answer is yes. The group-of-xs notation can prove it visually. If we take the expression "3*2" as "x x x" * "x x", we get the following: [x x][x x][x x]. Since there are three boxes containing two xs each, we can also make two groups of xs, by taking the first x from each box, and then the second x from each box. That is [x x x][x x x], ergo "x x" * "x x x". So no matter how many "x" these two numbers are formed by, the regrouping works, therefore commutativity holds.
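
The regrouping can be mimicked with nested lists, if you like code better than pictures; the helper names below are just ones picked for this sketch:

```python
def boxes(outer, inner):
    # `outer` boxes, each containing `inner` copies of "x".
    return [["x"] * inner for _ in range(outer)]

def regroup(grid):
    # Take the first "x" of every box, then the second, and so on.
    # This turns `outer` boxes of `inner` into `inner` boxes of `outer`.
    return [list(column) for column in zip(*grid)]

def count(grid):
    return sum(len(box) for box in grid)

three_by_two = boxes(3, 2)   # [x x][x x][x x]
two_by_three = regroup(three_by_two)  # [x x x][x x x]

# Same total number of "xs" either way: 3*2 == 2*3.
assert count(three_by_two) == count(two_by_three) == 6
```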

Associative

x*(y*z) = (x*y)*z = x*y*z

The answer is yes, but in a more complicated way. Our "group of xs" notation would still work visually, although it might require an extra dimension to properly visualize. (The amount of xs would be "x*y*z", displayed in a "rectangular cuboid... or box", with x, y, z as the lengths of the three sides.) Visually, it would be like a box made up of Lego bricks representing our "xs". And the equation would be asking whether summing up every slice from one face is the same as summing up all the slices from an adjacent face.
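
And a tiny sketch of that "box of Lego" picture: counting the cells of an x-by-y-by-z box slice by slice, from two different faces, gives the same total (the dimensions tried here are arbitrary):

```python
def count_by_xy_slices(x, y, z):
    # z slices, each an x-by-y rectangle: (x*y)*z cells in total.
    return sum(x * y for _ in range(z))

def count_by_yz_slices(x, y, z):
    # x slices, each a y-by-z rectangle: x*(y*z) cells in total.
    return sum(y * z for _ in range(x))

for dims in [(2, 3, 4), (5, 1, 7), (6, 6, 6)]:
    assert count_by_xy_slices(*dims) == count_by_yz_slices(*dims)
```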

Inverse operation

Division (/) is the inverse operation to multiplication.

x*y = z \iff z / y = x

Identity Element

\exists e \forall x(x*e = e*x = x)

We only need to find a single "e" value (or more, if they exist). We can try intuitively, by noting what multiplying by one looks like in the group-of-xs notation: "x x x" * "x" -> [x][x][x], or regrouped, [x x x]. Since each x is only copied once, we stay with the same number of "xs". Therefore "x", or 1, is a possible value.

The mathematical way requires a bit more equation manipulation, especially with respect to the division, which we haven't seen yet.

Property of zero... ?

\exists e \forall x (x * e = e * x = e)

We try with intuition. Substituting e=0 in the above formula is a valid way to make the equation True, and therefore satisfies the property. There is, philosophically, an element zero that kind of "deletes" everything, no matter what the other value is.

0*x = x*0 = 0.

This introduces a complication. In equations, we usually say that adding or subtracting the same value on both sides of the equation does not change anything. In the case of multiplication, this is NOT always true. The reason why is exactly the zero. While we can safely say

4 = 5

to be a False equation, as soon as we multiply both sides by zero, we get:

4*0 = 5*0
0 = 0

And we end up with a True equation. Therefore, if we want an equation's truth value to stay the same, we need to add a new rule: it is forbidden to manipulate both sides of the equation by multiplying or dividing by zero. Doing so will result in clearly wrong values, and many funny "proofs" end up proving things like "1=2". (We'll see some later.)

Distribution (over addition)

a*(b+c) = a*b + a*c

This property holds. In the same unary/group-of-xs notation, we can think of every "x" within b and c, and replace each of them individually with the "xs" contained in a. In both cases, we will have b+c total groups of "a" xs each.

For example, 2*(1+3) would be:

[xx]("x"+"xxx")[x x]*("x" + "x x x") [xx]("xxxx")[x x]*("x x x x") [xx][xx][xx][xx][x x][x x][x x][x x]

And 2*1+2*3:

[xx]"x"+[xx]"xxx"[x x]*"x" + [x x]*"x x x" [xx]+[xx][xx][xx][x x] + [x x][x x][x x] [xx][xx][xx][xx][x x][x x][x x][x x]

And naively we can see these are the same thing. Since the argument does not depend on how many "xs" each individual number contains, we can tell that distribution works in general.
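
A quick numeric spot-check of the distributive law a*(b+c) = a*b + a*c (random values over an arbitrary range, purely as an illustration, not a proof):

```python
import random

# Try many random integer triples; none should break the law.
for _ in range(1000):
    a = random.randint(-100, 100)
    b = random.randint(-100, 100)
    c = random.randint(-100, 100)
    assert a * (b + c) == a * b + a * c
print("no counterexample found")
```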

Why is this property important? It allows us to manipulate much more freely all the expressions that mix and match addition and multiplication. Since addition is also associative, we can treat each individual term of a long string of "+" separately. As an example:

x*(a + b + c + d) = x*a + x*b + x*c + x*d

So we can distribute this "x*" operation over every term of the addition. The reverse is also possible: if we find a similar "x" term being shared, it can be grouped up (factored out). For example, a famous manipulation:

a*a - b*b

We add zero. And of course, "x-x=0", so if we add a term "x" and also subtract it at the same point, nothing changes in value.

a*a - b*b + a*b - a*b

We see a "a*" and group up the first two terms.

a*(a+b) - b*b - a*b

We also see a "b" opportunity in the last two terms.

a*(a+b) + b*(-b - a)

Now, we see that "-b - a" could be turned as an addition. Previously we saw that (a+b)=ab-(a+b)=-a-b, so if we apply that....

a*(a+b) + b*(-(b + a))

Now to apply commutativity on the last "+"...

a*(a+b) + b*(-(a + b))

We are almost there, but there's a pesky "-" before that parenthesis that we don't know how to treat. After all, we don't know what counting a "negative" number of times actually means. Simply put, -1*a = -a, and in general (-a)*b = -(a*b) = -a*b. The negativity spreads throughout the entire term. Also, (-a)*(-b) = (-1)*(-1)*a*b = a*b: since we would end up with a "double minus", and we know that's just "+" again, "negative * negative" ends up "positive".

a*(a+b) + (-b)*(a + b)

And now we see the subterm "a+b" is common to both sides of the main "+" term. Therefore we group that one:

(a-b)*(a+b) = a*a - b*b

And therefore, what we ended up with equates back to our starting expression, making both expressions refer to the same value. Either form could be useful, so swapping and manipulating expressions like this is important for doing what we refer to as "basic algebra".
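
The whole chain of manipulations can also be spot-checked numerically; here is a small sketch confirming the identity we just derived (again an illustration, not a proof):

```python
import random

for _ in range(1000):
    a = random.randint(-1000, 1000)
    b = random.randint(-1000, 1000)
    # The identity derived above: a*a - b*b == (a-b)*(a+b).
    assert a * a - b * b == (a - b) * (a + b)
```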

Important note: distribution also works over subtraction, as subtraction can be replaced by addition of a "unary negative" term. (3-4 = 3+(-4))

a*(b-c) = a*b - a*c

Can you prove the above, using only the transformations we've shown?

Division

Finally, we have to talk about division. While it is an operation similar to multiplication, being its inverse, we know the order of the operands matters: a/b \neq b/a.

Commutativity is sadly out.

Let's test a random associativity example.

(36/6)/3 = 36/(6/3) = 36/6/3

The first expression is resolved by noting 36/6 = 6 \iff 6*6 = 36.

6/3
2

The second expression instead asks us to do 6/3 first. And since 2*3 = 6, we know that intermediate result is 2.

36/(6/3)
36/2
18

Since 2 is not equal to 18, we have found a counterexample, and can throw associativity out as well.

Distribution (over addition)

To demonstrate this one, we'll need to take a leap of faith and assume that results like 1/x exist even when x is not 1. That will mean there are numbers that are not whole, and as such are not exactly referable to tangible, intuitive quantities. There are still metaphors to explain such numbers, but for now it's best to think of them as "between integers", where the value is not quite 3 and not entirely 4, but somewhere in between the two.

Why is this important? Because we can thankfully acquire a new property thanks to our definition of what division is. Since we know that division is the inverse of multiplication, we can express our divisions as multiplications by "inverse" terms. Just like we did with our negative terms over in addition, we define this:

x/y = x*(1/y)

We determine that any number "y" has a counterpart, its inverse "1/y", which encodes in a multiplicative sense the concept of division by that number.

What happens when a "y" and "1/y" encounter each other? We can easily find out by using the above definition, and setting that x=yx=y.

y/y = y*(1/y)
1 = y*(1/y)

Therefore, when an element gets multiplied with its inverse, we get "1", the neutral element of multiplication.

We saw this exact structure before! Right when talking about addition.

x+(-x) = 0

An element added to its negative is equal to the neutral element of addition. And it fits with the rest: we earlier used equations like x-x = 0, which is this same equation.

Therefore, when we have a formula like

(x+y)/z = x/z + y/z

we can easily transform it into its multiplicative form

(x+y)*(1/z) = x*(1/z) + y*(1/z)

and reuse the distributive property of multiplication over addition.
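
To spot-check this without worrying about rounding of numbers that are "between integers", we can use exact rational arithmetic; Python's fractions module is used here purely as an illustration:

```python
from fractions import Fraction
import random

for _ in range(1000):
    x = Fraction(random.randint(-50, 50))
    y = Fraction(random.randint(-50, 50))
    z = Fraction(random.randint(1, 50))  # z must not be zero
    # (x+y)/z == x/z + y/z, i.e. (x+y)*(1/z) == x*(1/z) + y*(1/z)
    assert (x + y) / z == x / z + y / z == (x + y) * (1 / z)
```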

Identity Element

\exists e \forall x (x/e = e/x = x)

While division by 1 almost passes the test, as x/1 = x, the reverse, 1/x, is not x in general (as we saw, the order of the operands matters in division). So the identity property does not hold.

Zero element

\exists e \forall x (x/e = e/x = e)

Sadly, our notions of x/e and e/x are very incompatible, and any element we put to the test fails one way or the other. If e were 1, "x/e" would fail, as it would give back "x", not 1. If e were 0, "x/e" would also fail, as division is not defined when the divisor (the second term) is zero. Every other candidate fails similarly. (This can be proven by induction too, rather than having to check every possible number.)

By the way, why is "x/0" not defined? (expand for an answer) Let's try to think in terms of our definition.

We defined the division as:

x*y = z \iff z / y = x

Now, let's try the substitution "y=2" first, to see that everything works correctly.

x*2 = z \iff z/2 = x

This means that if we take our number "z" and divide it by "2", we get a number "x", and that number "x" multiplied by "2" gives us back our starting "z". So, using z=6, we need a number "x" that multiplied by 2 gives us 6. Consulting our chart/multiplication tables, we know 3 does the trick, therefore x=3 and 6/2=3. (Could another value of x have been possible?)

Now we test for zero.

x*0 = z \iff z/0 = x

We try doing the same as before, so we set "z" to "6", and now we're at an impasse. We need to find a number "x" that, when multiplied by 0, gives a result of 6. But we've proven above that x*0 = 0, no matter what "x" we pick! Therefore dividing by zero does not seem to be possible: 6/0 remains undefined.

What if we took "z" as 0 then? Maybe we can at least define "0/0"?

We now have the issue that

x*0 = 0 \iff 0/0 = x

it is true that we can pick x=5 and say, "yes, 5*0=0, therefore 0/0=5". But so does every other number we pick. Whether we pick x=0 or x=11000, they all work now. Therefore "0/0" is also undefined, because it could represent every possible value. And this is the true reason why dividing by zero is so problematic, and is disallowed. When multiplying or dividing both sides of an equation by zero, we change the values of the expressions in an irreparable way, so that our exploration of the equation becomes pointless. (That is, if we were wondering whether x=y, multiplying both sides by 0 would end up in 0=0, which, while True, is not what we were originally interested in.)
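
Programming languages have to confront the same problem. In Python, for instance, integer and exact-rational division by zero simply refuse to produce a value (a small illustration):

```python
from fractions import Fraction

for dividend in (6, 0):
    try:
        dividend / 0          # no x satisfies x * 0 == 6, and every x satisfies x * 0 == 0
    except ZeroDivisionError as error:
        print(dividend, "/ 0 ->", error)

# Exact rationals refuse as well.
try:
    Fraction(6, 0)
except ZeroDivisionError as error:
    print("Fraction(6, 0) ->", error)
```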
