Fundamental Concepts of Abstract Algebra


by Gertrude Ehrlich




Designed to offer undergraduate mathematics majors insights into the main themes of abstract algebra, this text contains ample material for a two-semester course. Its extensive coverage includes set theory, groups, rings, modules, vector spaces, and fields. Loaded with examples, definitions, theorems, and proofs, it also features numerous practice exercises at the end of each section.
Beginning with sets, relations, and functions, the text proceeds to an examination of all types of groups, including cyclic groups, subgroups, permutation groups, normal subgroups, homomorphisms, factor groups, and fundamental theorems. Additional topics include subfields, extensions, prime fields, separable extensions, fundamentals of Galois theory, and other subjects.

Product Details

ISBN-13: 9780486485898
Publisher: Dover Publications
Publication date: 12/14/2011
Series: Dover Books on Mathematics Series
Pages: 352
Product dimensions: 6.22(w) x 8.82(h) x 0.68(d)

About the Author

Gertrude Ehrlich is a Professor Emerita in the Department of Mathematics at the University of Maryland, College Park.

Read an Excerpt

Fundamental Concepts of Abstract Algebra

By Gertrude Ehrlich

Dover Publications, Inc.

Copyright © 1991 Gertrude Ehrlich
All rights reserved.
ISBN: 978-0-486-29186-4



The following historical remarks are intended to be read at the outset of your study, and then reread from time to time as your knowledge of the subject increases.



Abstraction Is Power

Much of the power of mathematics stems from its abstractness, which is the source of its universality. Mathematics was abstract from its very beginnings in prehistoric times. Primitive humans, confronted (through millions of years) with examples of sets such as three cows, three stones, or three trees, gradually learned to abstract from the differences of such sets, and to recognize a common property: "threeness." In this way, through many acts of abstraction, they created the natural numbers. The abstract concepts 1, 2, 3, 4, 5, 6, ... enabled them to deal more efficiently with concrete problems such as: if to two cows I add three cows, how many cows will I have? If to two stones I add three stones, how many stones? For, they could now solve, once and for all, the problem "2 + 3 = ?", and the number of resulting objects would be the same, no matter whether the sets being combined consisted of cows or stones.

Such acts of abstraction, resulting in greater universality and in increased power to solve problems, pervade the history of mathematics. We shall make them our central theme as we develop some of the basic concepts of that portion of mathematics now known as abstract algebra.

Ancient Origins of Algebra

Algebra had its origin in ancient times. The earliest known written record of problems we would now classify as algebraic is contained in Babylonian tablets and Egyptian papyri dating from around 1700 B.C. The solution of quadratic equations was known to the Babylonians at that time, and is believed also to have been known in ancient China. Based on Hindu and Greek sources, Mohammed ibn Mûsâ al-Khowârizmî of Baghdad, in 825 A.D., wrote a textbook called Al-jebr, which, following its translation during the twelfth century, was to have considerable influence on the subsequent development of algebra in Europe. (The word algebra is derived from the title of this book, and the word algorithm from its author's name.)

Polynomial Equations Through the Sixteenth Century

The basic problem of classical algebra was the solution of polynomial equations. Since the general linear and quadratic equations had been solved in pre-Christian times, the algebraic problem awaiting consideration through the Dark Ages and the Middle Ages was the solution of the general polynomial equation of degree three or more. The intellectual revival in Renaissance Italy brought about renewed interest in this problem and, around 1510, Scipione del Ferro solved the general cubic. This solution was rediscovered in 1535 by Niccolò Tartaglia. In 1540, Lodovico Ferrari succeeded in solving the general quartic. Solutions of both the cubic and the quartic were published in 1545 by Girolamo Cardano in his expository work Ars Magna. For this reason, the formulas for solving the general cubic are often referred to as Cardan's formulas.

The Nineteenth Century Breakthrough: Abel, Galois, and Groups

Thus, by the middle of the sixteenth century, formulas were known for expressing the roots of the general polynomial equation of degree n ≤ 4 in terms of the coefficients of the equation, using finitely many elementary algebraic operations and root extractions. Put more briefly, it was known how to solve the general equation of degree n ≤ 4 by radicals. The question remaining open was: Can the general polynomial equation of degree n ≥ 5 be solved by radicals? Little progress was made on this problem until the late eighteenth and early nineteenth centuries. The work of Joseph Louis Lagrange (1736–1813) on the quintic, the proof by Karl Friedrich Gauss (1777–1855) in 1799 of the Fundamental Theorem of Algebra (which states that every polynomial equation with complex coefficients has a root in the complex number field), and the discovery by Augustin Louis Cauchy (1789–1857) of permutation groups set the stage for the great breakthrough. During the first third of the nineteenth century, two young geniuses, both tragically short-lived, settled the question of solvability by radicals for equations of degree n ≥ 5. In 1824, the Norwegian Niels Henrik Abel (1802–1829) proved that the general polynomial equation of degree n ≥ 5 is not solvable by radicals. Independently, a very young Frenchman, Évariste Galois (1811–1832), discovered the general theory of solvability for polynomial equations (now known as Galois Theory) which implies, in particular, the unsolvability of the general polynomial equation of degree n ≥ 5 by radicals. The essential feature of Galois' discovery was the association between a polynomial and a certain group of permutations on its roots.

The significance of this discovery lies far beyond the solution of one ancient problem. Indeed, little use is made of the existing formulas for solving the general cubic and quartic, and no one seriously laments the non-existence of formulas for solving the general equation of degree n ≥ 5. For most practical purposes, there are numerical methods quite adequate to the task of approximating the roots of such equations to any desired degree of accuracy.

The Birth of Abstract Algebra: Further Uses of Groups

The significance of Galois' discovery lies in the introduction of the group concept as an important tool in mathematics. Abstract algebra was born at that time! After Galois' results had finally been published by Liouville in 1846, the use of the group concept began to spread within mathematics. In 1872, Felix Klein (1849–1925) published his Erlanger Program, proposing to formulate all of geometry as the study of invariants under groups of transformations. In 1893, Marius Sophus Lie (1842–1899) published his three-volume work on continuous groups of transformations. His theory forms a fundamental part of the theory of continuous functions. As time progressed, group theory thus found its way into geometry, analysis, topology, and other areas of mathematics where it made profound contributions. During the twentieth century, the use of group theory began to transcend the boundaries of mathematics, as groups became essential tools in such diverse fields of physics as crystallography, quantum mechanics, and elementary particle theory.

Other Algebraic Structures and Their Uses

Abstract algebra deals with algebraic structures. Simplest among the important algebraic structures is the group. A group consists of a set and a binary operation defined on the set, subject to certain requirements. All other important algebraic structures are basically groups in which additional operations or relations interact with the group operation. Among them are rings (including fields), linear (or vector) spaces, and algebras. These structures were introduced during the late nineteenth and early twentieth centuries. Abstract algebra as we now know it was profoundly influenced by such mathematicians as Richard Dedekind (1831–1916), who was first to formulate the notion of an ideal; Ernst Steinitz (1871–1928), who developed the abstract theory of fields; and Emmy Noether (1882–1935), whose contributions to abstract ideal theory and non-commutative algebras set the tone for algebraic research during this century.

The impact of abstract algebra on other areas of mathematics as well as on physical sciences was not confined to groups. For example, normed linear spaces (i.e., linear spaces in which a notion of length is defined) form the basis of modern (functional) analysis. A particular kind of normed linear space, known as a Hilbert space, plays an important role in modern physics. Galois Theory itself has found application in algebraic coding theory which is used in the construction of error-correcting codes for electronic communication systems.

The Science and Art of Mathematics

As exemplified by the history of algebra, the history of mathematics tends to advance from specific problems to the generalizations arising from their solution. The greater abstraction thus attained makes possible the solution of wider classes of specific problems. As mathematics develops, the pure mathematics of one era often becomes applied mathematics—sometimes decades or even centuries later. It would, however, be a mistake to conclude that the sole justification for mathematical research is the probable future applicability of its results to science or technology. Mathematics is a science in its own right, as well as an art, fueled by the desire to discover, and by the urge to create.

Recommended reading: [20], [21], [22], [25], [26].


Sets, Relations, and Functions

Our introduction to set theory will be largely informal. We leave the term set undefined, remarking only on its intuitive content: a collection, an aggregate, a bunch of objects. These objects, the members of a set, are called its elements. We write "a [member of] A" to signify that a is an element of the set A. (A set consisting of just one element is sometimes called a singleton; a set consisting of just two elements is called a pair.) It proves convenient to include among sets an empty set: a set without elements. Two sets are equal if they consist of the same elements. Equivalently, two sets are equal if no element of either set fails to be in the other. As a consequence, there cannot be more than one empty set—for, if A and B are both empty sets, then neither contains an element which fails to be in the other.

Notation: The symbol "[empty set]" denotes the empty set.
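The ideas above—membership, equality by extension, and the unique empty set—can be sketched with Python's built-in set type (the text itself uses no programming language; Python is an assumption of this illustration):

```python
# Membership, set equality, and the empty set, using Python sets.
A = {1, 2, 3}
B = {3, 2, 1, 1}        # order and repetition do not matter

print(2 in A)           # membership: 2 [member of] A -> True
print(A == B)           # equal sets: they consist of the same elements -> True
print(set() == set())   # there is only one empty set -> True
```

Note that Python's `set()` plays the role of [empty set]; the literal `{}` denotes an empty dictionary, not an empty set.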

If A and B are sets, then B is a subset of A if every element of B is an element of A. We write "B [subset] A" to signify that B is a subset of A.

It is easy to verify the following statements (see Exercise 1.2.2):

(1) [empty set] [subset] A for any set A;

(2) A [subset] A for any set A;

(3) If A, B are sets such that A [subset] B and B [subset] A, then A = B;

(4) If A, B, C are sets such that A [subset] B and B [subset] C, then A [subset] C.
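Statements (1)–(4) can be checked on small examples with Python's `<=` operator, which tests the subset relation (a minimal sketch; the choice of the sets A, B, C below is ours, not the text's):

```python
# Checking subset statements (1)-(4) with Python's <= (subset) operator.
A, B, C = {1}, {1, 2}, {1, 2, 3}

assert set() <= A                      # (1) [empty set] is a subset of any set
assert A <= A                          # (2) every set is a subset of itself
D = {1, 2}
assert not (B <= D and D <= B) or B == D   # (3) mutual inclusion forces equality
assert A <= B and B <= C and A <= C    # (4) the subset relation is transitive
print("all four statements verified on this example")
```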

Notation: We use braces to specify the elements of a set A, or of a subset B of A, either by listing the elements explicitly, or by stating a condition the elements must satisfy.

Example 1:

{1, 2, 3} is the set consisting of 1, 2, 3.

Example 2:

{x [member of] Z|x > 0} is the set of all positive integers. For any two sets A and B, we denote by "A\B" the set {x [member of] A|x [not member of] B}. In particular, if B [subset] A, then A\B is called the complement of B in A.
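Set-builder notation and the set A\B translate directly into Python comprehensions and the set difference operator (the particular sets below are illustrative assumptions):

```python
# Set-builder notation and the complement A\B via a comprehension and "-".
A = {x for x in range(-3, 4) if x > 0}   # {x in {-3,...,3} | x > 0}
B = {2, 3}

print(A)        # {1, 2, 3}
print(A - B)    # A\B = {1}; since B is a subset of A, this is the
                # complement of B in A
```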

The elements of a set may themselves be sets. In such a case, it is important not to confuse the relations of set membership ([member of]) and set inclusion ([subset]). If we think of the United Nations as a set of nations, and of each nation as a set of citizens, then the United Nations is a set whose elements are the member nations. Thus, for example, U.S. [member of] U.N, but U.S. [not subset] U.N. Also, if Jones is a U.S. citizen, then Jones [member of] U.S., but Jones [not member of] U.N. (This illustrates that, unlike [subset], [member of] is not transitive.)

Every set, A, has a power set, PA: the set whose elements are the subsets of A. For example, if A = {1, 2, 3}, then

PA = {[empty set], {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}.

It is easy to see that a finite set of n elements (n ≥ 0) has 2^n subsets (see Exercise 1.2.5). (For this reason, the power set of a set A is often denoted by "2^A".)
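A power set can be enumerated mechanically, which also confirms the 2^n count for the example above (a sketch using `itertools`; the helper name `power_set` is ours):

```python
# Enumerating the power set of a finite set; a set of n elements
# has 2**n subsets.
from itertools import chain, combinations

def power_set(s):
    elems = list(s)
    # subsets of every size r = 0, 1, ..., n
    return [set(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))]

P = power_set({1, 2, 3})
print(len(P))       # 8, i.e. 2**3
print(set() in P)   # True: the empty set is a subset of every set
```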

In discussing sets, we generally assume a "universe of discourse"—a set U to which the elements of all sets under discussion belong.

Now let C be a non-empty set of sets. Then the union of the sets in C is the set defined by

[union]A [member of] C A = {x [member of] U|x [member of] A for some A [member of] C}.

It consists of all elements (in U) which belong to any one of the sets of C.

The intersection of the sets in C is the set defined by

[intersection]A [member of] C A = {x [member of] U|x [member of] A for all A [member of] C}.

It consists of all elements (in U) which belong to all of the sets of C.


The union of two sets A and B (see Figure 2) is denoted by "A [union] B," and their intersection by "A [intersection] B."

Given non-empty sets A and B, we can form the set A × B consisting of all ordered pairs (a, b), where a [member of] A, b [member of] B. One can give formal definitions of ordered pair—for example, the definition due to K. Kuratowski: (a, b) = {{a}, {a, b}}—which, in effect, selects two elements, a [member of] A and b [member of] B, and then distinguishes one of them, say, a, as being "it" in some sense, notationally designated as "first." This makes it possible to prove the most important fact about ordered pairs: (a1, b1) = (a2, b2) if and only if a1 = a2 and b1 = b2. Another approach (which we adopt here) is to introduce ordered pairs as undefined entities, merely postulating that (a1, b1) = (a2, b2) if and only if a1 = a2 and b1 = b2.
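The pair encoding quoted above, (a, b) = {{a}, {a, b}}, can be tried out with `frozenset` (Python's hashable, immutable set, needed because ordinary sets cannot contain sets); the helper name `pair` is ours:

```python
# Encoding the ordered pair (a, b) as the set {{a}, {a, b}} and checking
# that encoded pairs are equal iff their components match in order.
def pair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

print(pair(1, 2) == pair(1, 2))   # True: same components, same order
print(pair(1, 2) == pair(2, 1))   # False: the encoding distinguishes order
```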

The set A × B is called the Cartesian product of A and B, exemplified readily by the Cartesian plane R × R (see Figure 3).

Other graphic examples of Cartesian products are the sets R × Z, Z × R, Z × Z, which we ask you to visualize presently. For further practice, let A = {1, 2}, B = {1, 2, 3}. Write out the elements of A × A, A × B, and B × B.
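The practice exercise just posed—writing out A × A, A × B, and B × B for A = {1, 2}, B = {1, 2, 3}—can be checked with `itertools.product` (a sketch; note that a Cartesian product of an m-element set and an n-element set has mn elements):

```python
# Cartesian products of small finite sets via itertools.product.
from itertools import product

A, B = {1, 2}, {1, 2, 3}
AxB = set(product(A, B))

print(sorted(AxB))   # [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)]
print(len(set(product(A, A))), len(AxB), len(set(product(B, B))))   # 4 6 9
```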

The notions of "ordered pair" and "Cartesian product of two sets" can be generalized: for finitely many sets A1, ..., An (n ≥ 1), we introduce ordered n-tuples (a1, ..., an), postulating that

(a1, ..., an) = (b1, ..., bn)

(ai, bi [member of] Ai for i = 1, ..., n)

if and only if ai = bi for each i = 1, ..., n.

The set of all ordered n-tuples (a1, ..., an) (ai [member of] Ai, i = 1, ..., n) is the Cartesian product A1 × ... × An of the sets A1,..., An.

Binary Relations

Let A be a non-empty set. A binary relation on A selects certain ordered pairs from A × A. (For example, the relation < on Z selects the ordered pairs (a, b) such that a < b (a, b [member of] Z).) For simplicity, we identify each binary relation with the set of ordered pairs it selects:

Definition 1.2.1: If A is a non-empty set and R is a non-empty subset of A × A, then R is a binary relation on A.
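Definition 1.2.1 can be made concrete by writing out the relation < on a small subset of Z as the set of ordered pairs it selects (the choice of the three-element set below is ours):

```python
# The relation < on A = {1, 2, 3}, represented as the subset of A x A
# consisting of the pairs it selects, per Definition 1.2.1.
A = {1, 2, 3}
R = {(a, b) for a in A for b in A if a < b}

print(R == {(1, 2), (1, 3), (2, 3)})   # True
print((2, 1) in R)                     # False: 2 < 1 does not hold
```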


Excerpted from Fundamental Concepts of Abstract Algebra by Gertrude Ehrlich. Copyright © 1991 Gertrude Ehrlich. Excerpted by permission of Dover Publications, Inc.

Table of Contents

1. Preliminaries
2. Groups
3. Rings, Modules, and Vector Spaces
4. Fields
