
Chapter 2

Set theory

2.1 Basic concepts


During the 19th century, mathematicians became increasingly concerned with
the foundations of mathematics and formalizing logic. Late in that century,
Georg Cantor started considering sets and their properties. We have the fol-
lowing informal definition of a set.

Definition 2.1.1 (Set). A set is an unordered collection of objects.

Because a set is unordered, it does not make any sense to say that an
element of a set occurs twice. We generally denote finite sets with curly braces,
such as
{1, 2, 3, 4, 5}.
For large or infinite sets, it helps to use set builder notation. For example,
the even integers can be expressed as

{x | x = 2k, k ∈ Z}.

This should be read as “the set of all x such that x = 2k for some integer k.”
It is the vertical bar | that is interpreted as “such that,” but it may also be
read as “with the property,” or “satisfying,” or “for which,” and probably even
others. Note that some authors (including those of your textbook) prefer to
use a colon : instead of a vertical bar; you should get comfortable with both
notations. We can write the set of even integers above more compactly as
{2k | k ∈ Z}.
Also, whenever the elements of your collection lie in some larger set, you should
specify this in the set builder notation. For example, our description of the
even integers really should have specified that x itself is an integer, as in

{x ∈ Z | x = 2k, k ∈ Z}.
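If you like to experiment, set-builder notation has a close analogue in Python's set comprehensions. The following minimal sketch is only an illustration (the bound of 20 is an arbitrary choice, since a computer cannot hold all of Z):

```python
# A finite analogue of {x ∈ Z | x = 2k, k ∈ Z}: we restrict to the
# integers from -20 to 20 (an arbitrary illustrative bound).
evens = {x for x in range(-20, 21) if x % 2 == 0}

# The shorthand {2k | k ∈ Z} builds the same finite slice directly.
also_evens = {2 * k for k in range(-10, 11)}

print(evens == also_evens)  # True: both comprehensions describe the same set
```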

Sets can contain just about anything, including other sets! For example, the
set A = {0, {1, 2}, 3} has as its elements the integers 0 and 3 as well as the set
{1, 2}. The set A has three elements.

Definition 2.1.2 (Empty Set). The empty set is the set with no elements. In
particular, the empty set is the unique set for which the statement (∀x)(x ∉ ∅)
is true.


Definition 2.1.3 (Subset). A set A is said to be a subset of B, denoted
A ⊆ B, if every element of A is an element of B. Logically, this means

A ⊆ B ⇐⇒ (∀x)(x ∈ A =⇒ x ∈ B).

We also say that A is contained in B, or B contains A.


The logical form of A ⊆ B tells us how to prove it. This statement is
a universally quantified implication (conditional), so to prove it, we pick any
element x of the set A and prove that it is also an element of B.
Theorem 2.1.4. For any sets A, B, C:
1. ∅ ⊆ A.
2. A ⊆ A.
3. If A ⊆ B and B ⊆ C, then A ⊆ C.
Definition 2.1.5 (Set equality). Two sets A, B are equal if they have exactly
the same elements. Logically, this means

A = B ⇐⇒ (∀x)(x ∈ A ⇐⇒ x ∈ B).

Since a biconditional is equivalent to the conditional in both directions, this
means

A = B ⇐⇒ (A ⊆ B ∧ B ⊆ A).

The above definition tells us we may prove set equality in two ways: either
we prove both subset relations, or we string together a chain of if-and-only-if
statements.
Definition 2.1.6 (Power set). Given a set A, we form the power set of
A, denoted P(A), which is the set containing all subsets of A. In set builder
notation this is:
P(A) = {X | X ⊆ A}.
Example 2.1.7. Suppose A = {1, 2, 3}; then the power set is

P(A) = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}},

which has 8 elements. We will prove later that if a finite set has n elements,
then its power set has 2^n elements.
Remark 2.1.8. By Theorem 2.1.4, we can see that for any set A, the power
set P(A) always contains ∅ and A. Note also that X ∈ P(A) if and only if
X ⊆ A. You will have to be careful not to confuse subsets with elements; the
notions are different.
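To make the power set concrete, here is a small illustrative Python sketch (not part of the text) that generates P(A) for the set A of Example 2.1.7; frozenset stands in for subsets because ordinary Python sets cannot themselves be elements of a set.

```python
from itertools import chain, combinations

def power_set(A):
    """Return P(A) as a set of frozensets."""
    elems = list(A)
    subsets = chain.from_iterable(combinations(elems, r)
                                  for r in range(len(elems) + 1))
    return {frozenset(s) for s in subsets}

P = power_set({1, 2, 3})
print(len(P))                   # 8, i.e. 2**3
print(frozenset({1, 2}) in P)   # True: {1, 2} ⊆ A, so {1, 2} ∈ P(A)
print(frozenset() in P)         # True: ∅ is always a subset
```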

2.2 Set operations


Now that we have defined sets, let’s remind ourselves that we already know of
a few: N ⊆ Z ⊆ Q ⊆ R ⊆ C, the natural numbers, integers, rational numbers,
real numbers and complex numbers. Having defined sets, we also want to
know how we can combine them to form new sets, which is the purpose of this
section.
Definition 2.2.1 (Union, intersection, difference). Given two sets A, B there
are three basic binary operations we can perform.

union A ∪ B = {x | x ∈ A ∨ x ∈ B}

intersection A ∩ B = {x | x ∈ A ∧ x ∈ B}

difference A \ B = {x | x ∈ A ∧ x ∉ B}

It is straightforward to see that union and intersection are each commutative
and associative since disjunction (∨) and conjunction (∧) are. Moreover,
they distribute over each other for the same reason. In addition, you can think
of set difference as being analogous to negation (¬), a point which will be made
clearer later. These facts are encapsulated in the following theorem, which I
encourage you to prove for yourself using if-and-only-if proofs for set equality.

Theorem 2.2.2. The binary operations union and intersection are commutative,
associative, and distribute over each other. That is,

A ∪ B = B ∪ A   and   A ∩ B = B ∩ A,
(A ∪ B) ∪ C = A ∪ (B ∪ C)   and   (A ∩ B) ∩ C = A ∩ (B ∩ C),
(A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C)   and   (A ∩ B) ∪ C = (A ∪ C) ∩ (B ∪ C).

Moreover, with the difference they satisfy a sort of De Morgan’s Laws:

A \ (B ∪ C) = (A \ B) ∩ (A \ C),
A \ (B ∩ C) = (A \ B) ∪ (A \ C).

In addition, the set difference satisfies a sort of double negation elimination,
namely,

A \ (A \ (A \ B)) = A \ B.
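These identities are easy to spot-check with Python's built-in set operations |, & and -; the sample sets below are arbitrary choices for illustration, and of course a check on one example is not a proof.

```python
# Arbitrary sample sets for spot-checking Theorem 2.2.2.
A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}

print((A - (B | C)) == ((A - B) & (A - C)))   # De Morgan-style law for difference
print((A - (B & C)) == ((A - B) | (A - C)))   # the dual law
print(((A | B) & C) == ((A & C) | (B & C)))   # distributivity of ∩ over ∪
print((A - (A - (A - B))) == (A - B))         # "double negation" for the difference
```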

Definition 2.2.3 (Disjoint). Sets A, B are said to be disjoint if their
intersection is empty, i.e., A ∩ B = ∅.

Definition 2.2.4 (Set complement). If there is a specified universe of discourse U,
and a set A of elements from this universe, we can talk about the
complement of A, denoted A^c, which is defined as U \ A.

The complement satisfies a few nice properties, and it really acts like negation.
So, for example, (A^c)^c = A, and it satisfies De Morgan's Laws by virtue
of Theorem 2.2.2, namely,

(A ∪ B)^c = A^c ∩ B^c,
(A ∩ B)^c = A^c ∪ B^c.

In the theorem below we collect a few more facts about set operations,
particularly how they interact with the subset relation; a short computational
spot check follows the list.

Theorem 2.2.5. For any sets A, B,

• A ⊆ A ∪ B

• A ∩ B ⊆ A

• A ∩ ∅ = ∅

• A ∪ ∅ = A

• if A ⊆ B, then B^c ⊆ A^c

• A ∩ A^c = ∅

• A ∪ A^c = U
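Python has no built-in complement, so a universe must be fixed explicitly; the sketch below, with an arbitrary universe U = {1, . . . , 10} and arbitrary sample sets, checks the complement laws and a few facts from Theorem 2.2.5.

```python
# Fix an arbitrary finite universe so that complements make sense.
U = set(range(1, 11))
A, B = {1, 2, 3}, {3, 4, 5}

def comp(X):
    """Complement of X relative to the universe U."""
    return U - X

print(comp(comp(A)) == A)                    # (A^c)^c = A
print(comp(A | B) == (comp(A) & comp(B)))    # (A ∪ B)^c = A^c ∩ B^c
print(comp(A & B) == (comp(A) | comp(B)))    # (A ∩ B)^c = A^c ∪ B^c
print((A & comp(A)) == set())                # A ∩ A^c = ∅
print((A | comp(A)) == U)                    # A ∪ A^c = U
```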

One other way to make new sets is to make ordered tuples. Remember, sets
are unordered, but it is very useful to have objects with order to them. For
example, when we considered points in the plane in algebra and calculus, we
represented them by a pair of real numbers (a, b), where a is the
horizontal offset from the origin and b is the vertical offset. It is clear that
(0, 1) ≠ (1, 0), so order matters.

Definition 2.2.6 (Cartesian product). For sets A, B we can form their Carte-
sian product (or just product) which consists of all ordered pairs where the
first component is an element of A and the second component is an element of
B. Symbolically,
A × B := {(a, b) | a ∈ A, b ∈ B}.

Note that your book calls the Cartesian product the “cross product.” This
is simply incorrect; no one refers to it this way. Cross products are either the
operation on three (or seven) dimensional vectors you learned in Calculus, or
they are much more complicated objects that involve group actions (you are
not supposed to know what a group action is). Just never use the term cross
product in place of Cartesian product.
Of course, we can form multiple Cartesian products to get ordered triples, or
more generally, ordered n-tuples. For example,

A × B × C = {(a, b, c) | a ∈ A, b ∈ B, c ∈ C}.

When we take the Cartesian product of a set with itself, we use exponential
notation for convenience, i.e.,

A × A × · · · × A = A^n,

where there are n copies of A in the product on the left.

This is precisely why we use the notation R^n in linear algebra to denote the
collection of vectors (they are ordered n-tuples of real numbers).
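In Python, itertools.product plays the role of the Cartesian product; the small arbitrary sets below illustrate A × B and the exponent notation A^n.

```python
from itertools import product

A = {1, 2}
B = {'x', 'y', 'z'}

AxB = set(product(A, B))             # all ordered pairs (a, b) with a in A, b in B
print(AxB)                           # e.g. {(1, 'x'), (1, 'y'), ..., (2, 'z')}
print(len(AxB) == len(A) * len(B))   # 6 pairs, foreshadowing the Product Rule later on

A_cubed = set(product(A, repeat=3))  # A^3: ordered triples of elements of A
print(len(A_cubed))                  # 8 = 2**3
```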
To maintain a grasp of Cartesian products, you should keep in mind the
analogy of rectangles. Consider the open intervals A = (2, 3) and B = (4, 7).
Then since A, B ⊆ R, we see that A × B ⊆ R^2, that is, it lies in the plane, so
we can visualize it! In particular, A × B is the (open) rectangle in the plane
whose x-values are between 2 and 3 and whose y-values are between 4 and
7. So, when you have some property about Cartesian products and you want
to see if and/or why it is true, imagine first that your Cartesian products are
rectangles in the plane. Hopefully this helps your intuition for the following
theorem; a quick computational spot check follows the list.

Theorem 2.2.7. Suppose A, B, C, D are sets. Then

• A × (B ∪ C) = (A × B) ∪ (A × C)

• A × (B ∩ C) = (A × B) ∩ (A × C)

• A × ∅ = ∅

• (A × B) ∩ (C × D) = (A ∩ C) × (B ∩ D)

• (A × B) ∪ (C × D) ⊆ (A ∪ C) × (B ∪ D)
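Here is the promised spot check of the identities in Theorem 2.2.7 on small arbitrary sets; again, this is illustration, not proof.

```python
from itertools import product

A, B, C, D = {1, 2}, {2, 3}, {3, 4}, {1, 4}

def times(X, Y):
    """Cartesian product X × Y as a set of ordered pairs."""
    return set(product(X, Y))

print(times(A, B | C) == (times(A, B) | times(A, C)))        # A × (B ∪ C) = (A×B) ∪ (A×C)
print(times(A, B & C) == (times(A, B) & times(A, C)))        # A × (B ∩ C) = (A×B) ∩ (A×C)
print(times(A, set()) == set())                               # A × ∅ = ∅
print((times(A, B) & times(C, D)) == times(A & C, B & D))     # intersection identity
print((times(A, B) | times(C, D)) <= times(A | C, B | D))     # union gives only containment
```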

2.3 Indexed families of sets


Finite operations are useful, but only so far. Of course, restricting attention to
finite operations may be useful when you want to do computation, but often
it is more useful to use more general operations to prove theorems, and only
later come back to find an efficient way to compute.
Definition 2.3.1 (Family, collection). A set whose elements are themselves
sets is often called a family or collection of sets. Technically, we could just
call this a set, but we often use one of these terms in order to aid our intuition
and memory. Generally, we will use script letters, such as 𝒜, ℬ, 𝒞, . . . , to denote
families of sets. Again, this is not necessary, merely a helpful practice.
Definition 2.3.2 (Arbitrary union). If 𝒜 is a family of sets, the union over
𝒜 is the set whose elements are in at least one of the sets in 𝒜. Logically, this
means

x ∈ ⋃_{A ∈ 𝒜} A ⇐⇒ (∃A ∈ 𝒜)(x ∈ A).

Definition 2.3.3 (Arbitrary intersection). If 𝒜 is a family of sets, the
intersection over 𝒜 is the set whose elements are in all of the sets in 𝒜. Logically,
this means

x ∈ ⋂_{A ∈ 𝒜} A ⇐⇒ (∀A ∈ 𝒜)(x ∈ A).

Note that if B ∈ 𝒜, then

⋂_{A ∈ 𝒜} A ⊆ B ⊆ ⋃_{A ∈ 𝒜} A.

Example 2.3.4. Let 𝒜 be the collection {A_n | n ∈ N} where A_n := {1, 2, . . . , n}.
Then

⋂_{A ∈ 𝒜} A = {1}   and   ⋃_{A ∈ 𝒜} A = N.

In the above example, notice how we used N as a tool to specify our collection
𝒜 using set builder notation. This is a common phenomenon and we give
it a name. We call a set an index set if it plays a role similar to that of N in
the previous example. Moreover, we say that it indexes the family 𝒜, and we
call 𝒜 an indexed family. This is because we can access (or look up) any set
in the family 𝒜 using an element of the index set. We call an element of the
index set an index (the plural of this is indices).
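Example 2.3.4 and the indexing idea can be mirrored in Python, provided we truncate the index set to a finite range (here 1 ≤ n ≤ 50), since a computer cannot range over all of N:

```python
# Finite stand-in for the family {A_n | n ∈ N} with A_n = {1, 2, ..., n};
# the index set is truncated to 1..50 for illustration.
family = [set(range(1, n + 1)) for n in range(1, 51)]

big_union = set().union(*family)               # plays the role of ⋃ A_n
big_intersection = set.intersection(*family)   # plays the role of ⋂ A_n

print(big_intersection)                 # {1}
print(big_union == set(range(1, 51)))   # True: the union is {1, ..., 50}
```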
With this concept, we can rewrite the union and intersection with slightly
different notation, which is often more useful:

⋃_{n ∈ N} A_n := ⋃_{A ∈ 𝒜} A   and   ⋂_{n ∈ N} A_n := ⋂_{A ∈ 𝒜} A.

In fact, when the index set is N or some contiguous string of integers, we often
write the union and intersection with notation similar to summation notation
from Calculus. That is,

⋃_{n=1}^{∞} A_n := ⋃_{n ∈ N} A_n,

and similarly,

⋃_{n=1}^{k} A_n := A_1 ∪ A_2 ∪ · · · ∪ A_k.
And similarly for intersections.

Definition 2.3.5 (Pairwise disjoint). An indexed family 𝒜 = {A_α | α ∈ I} of
sets is said to be pairwise disjoint if for any α, β ∈ I with α ≠ β, A_α ∩ A_β = ∅,
i.e., sets corresponding to different indices are disjoint.

2.4 Mathematical Induction


“Wait, induction? I thought math was deductive?” Well, yes, math is deductive
and, in fact, mathematical induction is actually a deductive form of reasoning;
if that doesn’t make your brain hurt, it should. So, actually, mathematical
induction seems like a misnomer, but really we give it that name because it
reminds us of inductive reasoning in science.
I like to think of mathematical induction via an analogy. How can I convince
you that I can climb a ladder? Well, first I show you that I can climb onto
the first rung, which is obviously important. Then I convince you that for any
rung, if I can get to that rung, then I can get to the next one.
Are you convinced? Well, let’s see. I showed you I can get to the first rung,
and then, by the second property, since I can get to the first rung, I can get to
the second. Then, since I can get to the second rung, by the second property,
I can get to the third, and so on and so forth. Thus I can get to any rung.
Theorem 2.4.1 (Principle of Mathematical Induction (PMI)). If S ⊆ N is a set
with the properties:
1. 1 ∈ S,
2. for all n ∈ N, if n ∈ S, then n + 1 ∈ S,
then S = N. A subset of N is called inductive if it has the second property
listed above. An inductive set is some tail of the set of natural numbers (i.e.,
it is the natural numbers less some initial segment).
If you are wondering about a proof of this theorem, then stop. Strictly
speaking, we should not call it a theorem at all; it is better viewed as part of
the definition of the natural numbers, but that's not important here.
The cool thing about induction (we will henceforth drop the formality of
“mathematical induction”) is that it allows us to prove infinitely many
statements. How does it do this? Suppose we have a proposition P(n) for each
natural number n ∈ N and we want to prove that for all n ∈ N, the statement
P(n) is true. Well, we prove two things:

1. P(1) (the base case), and

2. (∀n ∈ N)(P(n) =⇒ P(n + 1)) (the inductive step).

Then the set S = {n ∈ N | P(n) is true} satisfies the conditions of Theorem 2.4.1
and so S = N.
Example 2.4.2. Suppose we want to prove that for every n ∈ N, 3 divides
n^3 − n. Here, our proposition P(n) is that 3 divides n^3 − n, and we want to
prove (∀n ∈ N)(P(n)).
We start by proving the base case. Note that 1^3 − 1 = 0 = 3 · 0, so the
claim holds when n = 1.
We now prove the inductive step. Let n ∈ N be arbitrary and suppose
that 3 divides n^3 − n, so that n^3 − n = 3k for some k ∈ Z. Now notice that

(n + 1)^3 − (n + 1) = (n + 1)((n + 1)^2 − 1)
                    = n(n + 1)(n + 2)
                    = n^3 + 3n^2 + 2n
                    = (n^3 − n) + 3n^2 + 3n
                    = 3(k + n^2 + n).

Thus 3 divides (n + 1)^3 − (n + 1).


By the Principle of Mathematical Induction (PMI), for every n ∈ N, 3
divides n^3 − n.
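A quick numerical spot check of Example 2.4.2 for the first hundred values of n (evidence, not a proof):

```python
# Spot-check that 3 divides n**3 - n for n = 1, ..., 100.
# The actual proof is the induction argument above.
print(all((n**3 - n) % 3 == 0 for n in range(1, 101)))  # True
```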
Induction also allows us to define infinitely many things at the same time.
For example, consider the function f(n) = n^2 − n + 2. We will define a sequence
of numbers a_n by:

a_1 = 0,   a_{n+1} = f(a_n).
For those of you familiar with computer science or programming, you may
think of this as a recursively defined sequence. Induction and recursion are
two sides of the same coin; we won’t address the difference here.
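The recursive definition above translates directly into code; here is a minimal sketch that computes the first few terms of the sequence.

```python
def f(n):
    # The function from the text: f(n) = n^2 - n + 2.
    return n * n - n + 2

def a(n):
    """Return a_n, defined by a_1 = 0 and a_{n+1} = f(a_n)."""
    value = 0             # a_1 = 0
    for _ in range(n - 1):
        value = f(value)  # apply the recurrence a_{k+1} = f(a_k)
    return value

print([a(n) for n in range(1, 6)])  # the first five terms a_1, ..., a_5
```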

2.5 Well-Ordering and Strong Induction


In this section we present two properties that are equivalent to induction,
namely, the well-ordering principle, and strong induction.
Theorem 2.5.1 (Strong Induction). Suppose S is a subset of the natural
numbers with the property:

(∀n ∈ N)({k ∈ N | k < n} ⊆ S =⇒ n ∈ S).

Then S = N.
Proof. We prove by induction that for every n ∈ N, {1, . . . , n} ⊆ S.
Base case. Notice that by taking n = 1, we see that {k ∈ N | k < 1} = ∅,
which is clearly a subset of S. Therefore, by the property of S, we find that
1 ∈ S, so {1} ⊆ S.
Inductive step. Let n ∈ N be arbitrary and suppose that {1, . . . , n} ⊆ S. This
is the same as the set {k ∈ N | k < n + 1}, so by the property of S, n + 1 ∈ S.
Therefore, {1, . . . , n + 1} ⊆ S.
By induction, for every n ∈ N, {1, . . . , n} ⊆ S. Hence

N = ⋃_{n ∈ N} {1, . . . , n} ⊆ S.

We already knew S ⊆ N, so they must in fact be equal.


Theorem 2.5.2 (Well-Ordering Principle). Every nonempty subset of the nat-
ural numbers has a least element.
Proof. Let S be a subset of the natural numbers with no least element. Note that
1 ∉ S (i.e., 1 ∈ S^c), since 1 is the smallest natural number and so would be a
least element of S if it belonged to S. Now let n ∈ N be arbitrary and suppose
{1, . . . , n} ⊆ S^c. Therefore S ⊆ {n + 1, n + 2, . . .} = {k ∈ N | k ≥ n + 1}.
Thus n + 1 ∉ S because otherwise it would be a least element. Hence
{1, . . . , n + 1} ⊆ S^c. By Strong Induction, S^c = N and hence S = ∅. By
contraposition, the desired result follows.
Theorem 2.5.3. The well-ordering principle implies the principle of mathe-
matical induction.

Proof. Suppose N has the well-ordering principle. Let S ⊆ N be any subset
with the property that 1 ∈ S and for every n ∈ N, n ∈ S implies n + 1 ∈ S.
We wish to prove that S = N.
Suppose to the contrary that S ≠ N. Then S^c ≠ ∅, and so by the
well-ordering principle it has a least element k ∈ S^c. Since 1 ∈ S, k ≠ 1, so
k − 1 ∈ N. Moreover, we must have k − 1 ∈ S since k is the minimal element
of S^c. By the property assumed of S for the value n = k − 1, we find k ∈ S,
which is a contradiction.
Therefore, our assumption that S ≠ N was false, and hence S = N. In
other words, N has the principle of mathematical induction.

We now recall the division algorithm, but we can provide a proof this time.

Theorem 2.5.4 (Division Algorithm). For any integers a, b with a ≠ 0, there
exist unique integers q and r for which

b = aq + r,   0 ≤ r < |a|.

The integer b is called the dividend, a the divisor, q the quotient, and r the
remainder.

Proof. Let a, b ∈ Z with a nonzero. For simplicity, we will assume that a > 0
because the proof when a < 0 is similar. Consider the set of integers
A = {b − ak | k ∈ Z, b − ak ≥ 0}. This set is clearly nonempty, for if b ≥ 0 then
b − a·0 = b ≥ 0 is in A, and if b < 0 then b − ab = b(1 − a) ≥ 0 is in A.
By the Well-Ordering Principle, A has a minimum element, which we call
r, so that r = b − aq for some integer q. Then r ≥ 0, and notice
that r − a = b − aq − a = b − a(q + 1). Since a > 0, r − a < r, so by the
minimality of r the number r − a cannot lie in A; hence r − a < 0, or
equivalently, r < a.
Now suppose there are some other integers q', r' with 0 ≤ r' < a and
b = aq' + r'. Then aq + r = aq' + r' and hence r − r' = aq' − aq = a(q' − q).
Now −a < −r' ≤ r − r' ≤ r < a and hence a|q' − q| = |a(q' − q)| = |r − r'| < a.
Dividing through by a, we obtain |q' − q| < 1, and since q' − q ∈ Z it must be
zero. Hence q = q' and so also r = r'.
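The proof is constructive in spirit: Python's built-in divmod produces the same q and r, and the sketch below also mimics the well-ordering argument directly (for a > 0) by searching for the least nonnegative value of b − ak.

```python
def division_algorithm(b, a):
    """Return (q, r) with b = a*q + r and 0 <= r < a, assuming a > 0.
    This mirrors the proof: find the least nonnegative element of
    A = {b - ak | b - ak >= 0}."""
    assert a > 0
    q = 0
    r = b - a * q
    while r < 0:       # climb up into A if b is negative
        q -= 1
        r = b - a * q
    while r >= a:      # step down toward the minimal element of A
        q += 1
        r = b - a * q
    return q, r

print(division_algorithm(38, 5))     # (7, 3), since 38 = 5*7 + 3
print(division_algorithm(-7, 3))     # (-3, 2), since -7 = 3*(-3) + 2
print(divmod(38, 5), divmod(-7, 3))  # Python's built-in agrees when a > 0
```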

Lemma 2.5.5. If a, b ∈ Z are nonzero and relatively prime and if a, b both divide
c, then ab divides c.

Proof. Let a, b be nonzero relatively prime integers which divide some integer
c. Since a and b divide c, there exist integers r, s for which c = ar = bs. Since
gcd(a, b) = 1, by Theorem 1.7.5 there exist integers x, y for which ax + by = 1.
Multiplying by c and using the expressions for c above, we find

c = acx + bcy = absx + abry = ab(sx + ry),

so ab divides c.

2.6 Principles of counting


It is often useful in both pure and applied mathematics to count the sizes of
finite sets. In this section we prove some basic theorems of this sort. If A is
a set, let |A| denote the number of elements in A. Note that ∅ is a finite set
with |∅| = 0.

Theorem 2.6.1 (Sum Rule). If A, B are disjoint finite sets then |A ∪ B| =
|A| + |B|. More generally, if A_1, . . . , A_n is a pairwise disjoint collection of
finite sets, then

|⋃_{j=1}^{n} A_j| = ∑_{j=1}^{n} |A_j|.

Proof. Obvious, but we omit the technical proof until we have a proper
discussion of the definition of the number of elements in a set, which won't occur
until Chapter 5.

Theorem 2.6.2. For finite sets A, B, which are not necessarily disjoint,

|A ∪ B| = |A| + |B| − |A ∩ B| .

Proof. Obvious when you look at a Venn diagram, but we omit the technical
proof for similar reasons.

Theorem 2.6.3 (Inclusion–Exclusion Principle). Let A_1, . . . , A_n be a collection
of finite sets. Then

|⋃_{j=1}^{n} A_j| = ∑_{j=1}^{n} (−1)^{j+1} ∑_{1 ≤ n_1 < ··· < n_j ≤ n} |A_{n_1} ∩ · · · ∩ A_{n_j}|.

In the above formula the inner sum is taken over all subcollections of j different sets
from among A_1, . . . , A_n. In particular, when n = 3, the above becomes

|A_1 ∪ A_2 ∪ A_3| = |A_1| + |A_2| + |A_3|
                  − (|A_1 ∩ A_2| + |A_1 ∩ A_3| + |A_2 ∩ A_3|)
                  + |A_1 ∩ A_2 ∩ A_3|.

Proof. By induction and using Theorem 2.6.2.
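As an illustration, here is the n = 3 case of inclusion–exclusion checked on arbitrary sample sets.

```python
# Verify |A1 ∪ A2 ∪ A3| = Σ|Ai| - Σ|Ai ∩ Aj| + |A1 ∩ A2 ∩ A3| on sample sets.
A1, A2, A3 = {1, 2, 3, 4}, {3, 4, 5, 6}, {1, 4, 6, 7}

lhs = len(A1 | A2 | A3)
rhs = (len(A1) + len(A2) + len(A3)
       - (len(A1 & A2) + len(A1 & A3) + len(A2 & A3))
       + len(A1 & A2 & A3))

print(lhs, rhs, lhs == rhs)   # both sides equal 7
```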

Theorem 2.6.4 (Product Rule). If A, B are finite sets then |A × B| =
|A| · |B|. More generally, if A_1, . . . , A_n is a collection of finite
sets, then

|A_1 × A_2 × · · · × A_n| = ∏_{j=1}^{n} |A_j|.

Proof. Using induction and the Sum Rule.

Definition 2.6.5 (Permutation (combinatorics)). A permutation of a finite
set is an arrangement of the elements in a particular order.

For the next theorem, recall that the factorial of a positive integer n is
defined inductively by

0! = 1
n! = n(n − 1)!

Equivalently, n! = n(n − 1)(n − 2) · · · 1.

Theorem 2.6.6. The number of permutations of a set with n elements is n!.

Proof. By induction on the number of elements in the set.
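Theorem 2.6.6 can be checked computationally by listing all arrangements with itertools.permutations; math.factorial plays the role of n!. The four-element set below is an arbitrary example.

```python
from itertools import permutations
from math import factorial

S = {'a', 'b', 'c', 'd'}
perms = list(permutations(S))           # every arrangement of the 4 elements

print(len(perms), factorial(4))         # 24 24
print(len(perms) == factorial(len(S)))  # True
```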



Theorem 2.6.7. If n ∈ N and r ∈ Z with 0 ≤ r ≤ n, then the number of
permutations of any r distinct objects from a set of n objects is

n!/(n − r)! = n(n − 1)(n − 2) · · · (n − r + 1).

Proof. By induction on r and Theorem 2.6.6.


Definition 2.6.8 (Combination). For n ∈ N and r ∈ Z with 0 ≤ r ≤ n, a
combination of r elements from a set of size n is just a subset of size r.
The number of such combinations is called the binomial coefficient and is
denoted C(n, r). We read this symbol as “n choose r.”


Theorem 2.6.9 (Combination Rule). We have the following formula for the
binomial coefficients:

C(n, r) = n! / (r!(n − r)!).

Proof. Note that the number of permutations of any r distinct objects from a
set of n objects is necessarily the number of subsets of size r (i.e., the number
of combinations, the binomial coefficient) times the number of ways to
arrange those r elements (i.e., permutations of a set of size r). Therefore, by
Theorem 2.6.6 and Theorem 2.6.7, we have

r! · C(n, r) = n! / (n − r)!,

thereby proving the result.

Note: the above theorem guarantees C(n, r) = C(n, n − r).
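The Combination Rule, the counting argument in its proof, and the symmetry C(n, r) = C(n, n − r) can all be checked with Python's math module (math.comb and math.perm require Python 3.8 or later); n = 7 and r = 3 are arbitrary sample values.

```python
from math import comb, factorial, perm
from itertools import combinations

n, r = 7, 3

# Combination Rule: C(n, r) = n! / (r! (n - r)!)
print(comb(n, r) == factorial(n) // (factorial(r) * factorial(n - r)))  # True

# Counting argument from the proof: r! * C(n, r) = n!/(n - r)!
print(factorial(r) * comb(n, r) == perm(n, r))                          # True

# Symmetry noted above: C(n, r) = C(n, n - r)
print(comb(n, r) == comb(n, n - r))                                     # True

# Direct enumeration: the number of 3-element subsets of a 7-element set
print(len(list(combinations(range(7), 3))) == comb(7, 3))               # True
```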

Theorem 2.6.10 (Binomial Theorem). If n ∈ N and a, b ∈ R, then

(a + b)^n = ∑_{r=0}^{n} C(n, r) a^r b^{n−r}.

Proof. By induction and using the fact that when r ≥ 1,

C(n, r) = C(n − 1, r) + C(n − 1, r − 1),

which you should prove. This displayed equation essentially says that Pascal's
triangle generates the binomial coefficients.
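Finally, a numerical spot check of the Binomial Theorem and of Pascal's identity used in its proof; as always, this is evidence rather than a proof, and the values a = 2.0, b = 3.0, n = 5 are arbitrary.

```python
from math import comb

a, b, n = 2.0, 3.0, 5   # arbitrary sample values

lhs = (a + b) ** n
rhs = sum(comb(n, r) * a**r * b**(n - r) for r in range(n + 1))
print(abs(lhs - rhs) < 1e-9)   # True: (a + b)^n matches the expansion

# Pascal's identity C(m, k) = C(m-1, k) + C(m-1, k-1) for small m and 1 <= k <= m
print(all(comb(m, k) == comb(m - 1, k) + comb(m - 1, k - 1)
          for m in range(1, 12) for k in range(1, m + 1)))  # True
```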
