ePortfolio in Discrete Mathematics

Reflections, Thoughts, Applications, Learnings, and Analysis

HOW DO WE APPLY DISCRETE MATHEMATICS IN REAL LIFE?

I. Counting

Counting is the mathematical process of determining how many ways events or choices can occur, using structured rules such as the sum and product rules.

It's widely used in daily planning, event scheduling, game scoring, and estimating possible outcomes, like computing how many outfit combinations you can wear.

I learned the importance of logical sequencing using tools like tree diagrams and tables, which helped me approach complex tasks step by step.

This topic taught me that even simple counts can become tricky without structure—like in card games where each move has multiple outcomes.
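
To make the outfit example concrete, here is a minimal Python sketch of the product rule; the wardrobe counts are made up purely for illustration.

# Product rule: independent choices multiply (hypothetical wardrobe counts).
shirts, pants, shoes = 4, 3, 2
outfits = shirts * pants * shoes
print(outfits)  # 24 possible outfit combinations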

The pigeonhole principle states that if you place more items into containers than there are containers, at least one container must hold more than one item.

It's useful in proving unavoidable scenarios, like two people sharing a birthday once a group has more people than days in a year, or duplicate usernames appearing once a system has more users than available names.

This principle helped me understand how to prove something must exist without finding it directly—powerful in logical problem-solving and cybersecurity checks.

In games, this principle can explain why conflicts or overlaps are inevitable when too many players pick from a limited set of options.
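
A tiny Python sketch of the principle, using birth months as the "containers": with 13 people and only 12 months, a shared month is unavoidable no matter how the picks fall.

import random

# 13 people assigned to 12 birth months: a repeat is guaranteed by the pigeonhole principle.
months, people = 12, 13
picks = [random.randrange(months) for _ in range(people)]
print(len(picks) > len(set(picks)))  # always True: 13 picks cannot all be distinct among 12 months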

Permutations are ordered arrangements, while combinations are selections where order doesn't matter.

They apply to password generation, sports brackets, group project assignments, and even crafting levels in video games.

Mastering the formulas enhanced my ability to evaluate scenarios efficiently, especially when making decisions under constraints.

Learning when to use permutations vs. combinations improved my accuracy in analyzing options—like determining seating orders or lottery odds.
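
A quick comparison of the two formulas using Python's standard library (the numbers are chosen just for illustration):

import math

# Ordered arrangements: seating 3 of 10 people in a row.
print(math.perm(10, 3))  # 720 = 10 * 9 * 8

# Unordered selections: choosing 6 lottery numbers out of 49.
print(math.comb(49, 6))  # 13983816 possible tickets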

Binomial coefficients count how many ways we can choose a fixed number of items from a larger set, and they appear as the entries of Pascal’s Triangle.

They are used in expanding algebraic expressions, calculating probabilities, and designing balanced tournament brackets.

I learned to connect patterns in algebra to combinatorics—for instance, how (a + b)^n reflects real-world choice structures.

This connection deepened my appreciation for how mathematical expressions mirror real-life arrangements like team pairings or recipe combinations.
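
To see the connection concretely, a short Python sketch prints one row of Pascal’s Triangle; its entries C(n, k) are exactly the coefficients in the expansion of (a + b)^n.

import math

n = 4
row = [math.comb(n, k) for k in range(n + 1)]
print(row)  # [1, 4, 6, 4, 1] -> (a + b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4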

Generalized permutations and combinations extend the basic counting rules to cover repeated elements or indistinguishable items.

It’s applied in role assignments, product arrangements with identical items, and chemical molecule structures.

Learning how to adjust for repetition helped me solve more realistic problems—like arranging identical desks in different classrooms.

It reminded me that real-life problems aren't always unique and clean—many involve repeated choices and limited variation.
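
The adjustment divides out the orderings of identical items: arranging n objects where some are indistinguishable gives n! / (n1! · n2! · ...). A minimal Python check, using the word "BALLOON" as a stand-in example:

import math

# "BALLOON" has 7 letters, with L and O each repeated twice.
arrangements = math.factorial(7) // (math.factorial(2) * math.factorial(2))
print(arrangements)  # 1260 distinct arrangements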

Generating permutations and combinations means systematically listing every possible arrangement or selection, often with algorithms or code.

It’s used in simulations, testing software inputs, puzzle solvers, and game level generation (e.g., generating all possible puzzle boards).

I learned to write programs that list possible outcomes efficiently—crucial for solving optimization and search problems.

By generating possibilities programmatically, I saw how computers explore game paths, AI decision trees, and route planning solutions.
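
In Python, the standard itertools module generates these directly; a small sketch:

from itertools import permutations, combinations

items = ["A", "B", "C"]
print(list(permutations(items, 2)))  # all ordered pairs, 6 in total
print(list(combinations(items, 2)))  # all unordered pairs: ('A','B'), ('A','C'), ('B','C')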

II. Algebraic Structures and Groups

This topic introduces abstract mathematical structures such as groups, which are sets combined with operations that follow specific rules like closure, associativity, identity, and invertibility.

Algebraic structures are used in cryptography (e.g., RSA encryption), error-detecting codes, and digital systems where operations must behave predictably and securely.

For example, the set {0, 1, 2} with addition modulo 3 forms a group: (1 + 2) mod 3 = 0. This shows how structure ensures that results stay within the set.
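
A brief Python check of the group axioms for {0, 1, 2} under addition modulo 3, just to confirm the example above:

elements = [0, 1, 2]

def op(a, b):
    return (a + b) % 3

closure = all(op(a, b) in elements for a in elements for b in elements)
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in elements for b in elements for c in elements)
identity = all(op(0, a) == a and op(a, 0) == a for a in elements)
inverses = all(any(op(a, b) == 0 for b in elements) for a in elements)
print(closure, associative, identity, inverses)  # True True True True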

Learning group properties made abstract problems more approachable, especially when analyzing symmetrical patterns in puzzles or simplifying game mechanics like rotating a Rubik’s cube, which follows group theory.

I now think of problems in terms of relationships and structure. This perspective is powerful in algorithms, computer science, and theoretical physics, where systems often follow strict mathematical rules.

III. Groups, Rings, and Fields

This topic explores algebraic systems that extend groups into rings and fields—structures defined by sets and operations that follow specific rules.

These structures are used in digital circuits, error correction (like QR codes), and cryptographic algorithms such as AES, which rely on finite field arithmetic.

For example, in the field ℤ₅ (integers modulo 5), every non-zero element has a multiplicative inverse: 2 × 3 ≡ 1 mod 5. This allows secure mathematical operations in encryption.

Rings introduce two operations (like addition and multiplication), such as ℤ (integers), where you can add and multiply but not always divide. Fields go further by supporting division by any nonzero element, like the rational numbers ℚ.
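
A short Python sketch finds the multiplicative inverse of every non-zero element of ℤ₅ by brute force, confirming the field property mentioned above:

p = 5
for a in range(1, p):
    inverse = next(b for b in range(1, p) if (a * b) % p == 1)
    print(a, inverse)  # pairs 1·1, 2·3, 3·2, 4·4 are each congruent to 1 mod 5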

Visualizing these systems helped: for instance, drawing clock faces for modular arithmetic (groups), multiplication-table grids (rings), and polygon symmetries (group actions).

Understanding this hierarchy showed me how complex systems like computer processors and secure messaging apps rely on structured, rule-based math beneath the surface.

IV. Discrete Probability

Discrete probability deals with the likelihood of outcomes in systems with a finite or countable number of possibilities.

It’s commonly used in games of chance (like dice, cards, and lotteries), simulations, and decision-making where outcomes can be listed and counted.

For example, the probability of rolling a 4 on a six-sided die is 1/6. In a card game, the chance of drawing an Ace from a standard deck is 4/52.
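
These single-event probabilities can be checked with exact fractions in Python:

from fractions import Fraction

p_roll_four = Fraction(1, 6)   # one favourable face out of six
p_draw_ace = Fraction(4, 52)   # four Aces in a 52-card deck
print(p_roll_four, p_draw_ace)  # 1/6 and 1/13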

I learned to analyze outcomes and calculate their probabilities, which is key in evaluating risks, game strategies, and data-driven decisions.

It taught me how randomness isn't just chaos—it can be measured and managed using rules and logic.

Probability theory formalizes these ideas through rules like the addition rule, the multiplication rule, and conditional probability.

These principles are used in fields like finance (stock predictions), weather forecasting, and AI algorithms where event dependencies matter.

Example: If the chance of rain is 0.3 and the chance of carrying an umbrella given that it rains is 0.9, then the chance of both happening is 0.3 × 0.9 = 0.27 (27%).
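
Here is the same example as a small probability-tree computation in Python; the 0.3 and 0.9 come from the example above, while the 0.2 chance of carrying an umbrella on a dry day is an assumed value added just to complete the tree.

p_rain = 0.3
p_umbrella_given_rain = 0.9
p_umbrella_given_dry = 0.2  # assumed value for illustration

p_rain_and_umbrella = p_rain * p_umbrella_given_rain                    # multiplication rule
p_umbrella = p_rain_and_umbrella + (1 - p_rain) * p_umbrella_given_dry  # addition over branches
print(round(p_rain_and_umbrella, 2), round(p_umbrella, 2))  # 0.27 0.41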

Mastering these rules gave me confidence in evaluating compound events and understanding how one probability can affect another.

It made randomness feel systematic, especially when building probability trees or event diagrams.

Bayes' theorem is used to update probabilities based on new evidence.

It’s widely used in spam filters, disease diagnosis (e.g., COVID test accuracy), and machine learning classification models.

Example: If a disease affects 1% of the population and a test is 99% accurate, Bayes' theorem helps find the actual chance of being sick given a positive result—which is often much lower than expected.
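
A worked Python version of that example, assuming "99% accurate" means both 99% sensitivity and a 1% false-positive rate:

p_sick = 0.01                # prior: 1% of the population has the disease
p_pos_given_sick = 0.99      # sensitivity
p_pos_given_healthy = 0.01   # false-positive rate (assumed from "99% accurate")

p_pos = p_sick * p_pos_given_sick + (1 - p_sick) * p_pos_given_healthy
p_sick_given_pos = p_sick * p_pos_given_sick / p_pos  # Bayes' theorem
print(round(p_sick_given_pos, 2))  # 0.5: only about a 50% chance of being sick despite the positive test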

I learned how initial beliefs (prior probabilities) shift with new data (evidence), improving decision-making under uncertainty.

This concept changed how I think about evidence—not just whether it's present, but how strongly it supports a conclusion.

Expected value measures the average outcome of a random process, while variance measures how spread out those outcomes are.

These are used in economics (predicting returns), insurance (risk pricing), and games (designing fair or balanced systems).

Example: In a game where you win ₱100 with probability 0.2 and lose ₱20 with probability 0.8, the expected value is (100×0.2) + (-20×0.8) = ₱4. This tells you the average gain per game.

Variance helps measure risk—low variance means outcomes are consistent; high variance means unpredictable results.
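
The same game, together with its variance, in a few lines of Python:

outcomes = [(100, 0.2), (-20, 0.8)]  # (payoff in pesos, probability)

expected = sum(x * p for x, p in outcomes)                    # 100*0.2 + (-20)*0.8 = 4
variance = sum(p * (x - expected) ** 2 for x, p in outcomes)  # spread of payoffs around the mean
print(expected, round(variance, 1))  # 4.0 and 2304.0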

It trained me to assess not just what’s likely to happen, but how risky or volatile an outcome might be—important for strategic planning and game theory.

V. Graphs

Graphs are mathematical structures used to model pairwise relationships between objects, often visualized as nodes (vertices) and edges (links).

For example, a social network graph might have 5 people (nodes): Alice, Bob, Carol, Dave, and Eve. If Alice is friends with Bob and Carol, Bob also knows Dave, and Carol knows Eve, the edge list is (Alice–Bob), (Alice–Carol), (Bob–Dave), and (Carol–Eve).
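
The same small friendship graph can be written as a Python adjacency list, where each person maps to the set of direct connections:

friends = {
    "Alice": {"Bob", "Carol"},
    "Bob":   {"Alice", "Dave"},
    "Carol": {"Alice", "Eve"},
    "Dave":  {"Bob"},
    "Eve":   {"Carol"},
}
print(friends["Alice"])  # Alice's direct connections: {'Bob', 'Carol'} (set order may vary)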

Another example is a road map where intersections are nodes and roads are edges. If you want to travel from node 1 to node 4, the graph helps find routes efficiently.

Understanding graphs helped me model complex systems, like finding shortest paths or analyzing connectivity, which are crucial in network design and analysis.

Graphs make it easier to visualize and solve problems involving connections and relationships.

Special graph types include bipartite graphs (nodes split into two sets, with edges only between the sets), complete graphs (every pair of nodes connected), and trees (connected graphs with no cycles).

Bipartite graphs apply to matching problems like job assignments, while trees model organizational hierarchies and file systems.
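
A tiny numeric sanity check of two of these types, with n = 4 chosen just for illustration: a complete graph on n nodes has n(n-1)/2 edges, while any tree on n nodes has exactly n-1 edges.

n = 4
complete_edges = n * (n - 1) // 2  # K_4: every pair of the 4 nodes is connected -> 6 edges
tree_edges = n - 1                 # any tree on 4 nodes: connected, no cycles -> 3 edges
print(complete_edges, tree_edges)  # 6 3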

Learning these types helped me identify patterns and select the right graph model for different problems.

It improved my problem-solving by clarifying graph properties and constraints.

Graphs can be stored using adjacency matrices or adjacency lists, and graph isomorphism asks whether two graphs have the same structure despite different labels.

These are important in database queries, chemical compound comparisons, and pattern recognition.
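
As a sketch, here are the adjacency matrices of two tiny graphs with different labels, plus a brute-force isomorphism check that simply tries every relabelling; real isomorphism testing on larger graphs is far more involved.

from itertools import permutations

# Two 3-node graphs with different labels but the same shape (a path).
g1 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
g2 = [[0, 0, 1],
      [0, 0, 1],
      [1, 1, 0]]

def isomorphic(a, b):
    # Try every relabelling of the vertices (only feasible for very small graphs).
    n = len(a)
    return any(all(a[i][j] == b[p[i]][p[j]] for i in range(n) for j in range(n))
               for p in permutations(range(n)))

print(isomorphic(g1, g2))  # True: both are the same path graph, just labelled differently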

I learned to represent graphs efficiently and detect structural equivalence, which is useful in optimizing data storage and finding similarities.

Detecting hidden graph similarities opened up new ways to analyze complex data.

Connectivity studies whether there is a path between nodes, including ideas like connected components and bridges (edges whose removal disconnects the graph).

Used to evaluate network reliability, such as checking if communication stays intact when some nodes fail.
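
A minimal Python sketch of a connectivity check using breadth-first search on an assumed toy network:

from collections import deque

# Undirected network as an adjacency list; node "D" is cut off from the rest.
network = {"A": ["B"], "B": ["A", "C"], "C": ["B"], "D": []}

def reachable(graph, start):
    # Breadth-first search: collect every node that has a path from `start`.
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

print(reachable(network, "A"))                       # {'A', 'B', 'C'}
print(len(reachable(network, "A")) == len(network))  # False: the network is not fully connected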

I learned to find connected parts and critical links, vital for designing robust networks.

This deepened my appreciation of how connectivity affects system stability and function.

Euler paths traverse every edge exactly once; Hamiltonian paths visit every vertex exactly once.

These are applied in route planning (garbage collection routes), optimization problems like the traveling salesman problem (finding the cheapest Hamiltonian tour), and game level design.
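
A short Python check of the classic condition for Euler paths in a connected graph: one exists exactly when zero or two vertices have odd degree. The little graph below is an assumed example.

# Undirected graph as an edge list; vertex degrees decide whether an Euler path exists.
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

odd_vertices = [v for v, d in degree.items() if d % 2 == 1]
# Connected graph: 0 odd-degree vertices -> Euler circuit, 2 -> Euler path, otherwise neither.
print(odd_vertices)                 # ['C', 'D']: exactly two, so an Euler path exists (e.g. D-C-A-B-C)
print(len(odd_vertices) in (0, 2))  # True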

I learned to identify these paths and their existence conditions, useful in optimizing routes and resource use.

Exploring these paths made traversal challenges more logical and fun.

Finding the shortest path means determining the minimum cost or distance between nodes.

Applications include GPS navigation, delivery logistics, and network routing.

I learned Dijkstra’s algorithm and heuristic methods, which efficiently find shortest paths in weighted graphs.
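
Here is a compact version of Dijkstra's algorithm with a binary heap, run on an assumed weighted graph; note that it relies on edge weights being non-negative.

import heapq

# Weighted, directed toy graph: node -> list of (neighbour, edge cost).
graph = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}

def dijkstra(graph, source):
    # Shortest known distance from the source to every node.
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry
        for neighbour, cost in graph[node]:
            if d + cost < dist[neighbour]:
                dist[neighbour] = d + cost
                heapq.heappush(heap, (d + cost, neighbour))
    return dist

print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}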

This topic stood out as highly practical, bridging theory and real-world problem solving.

Planar graphs can be drawn on a plane without edges crossing.

Used in designing circuit boards, maps, and urban planning to avoid conflicts and overlaps.

I learned planarity tests and how to optimize layouts, which aids in spatial design problems.
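
One planarity fact worth remembering is Euler's formula for connected planar graphs, V - E + F = 2, and the edge bound E ≤ 3V - 6 that follows from it for simple graphs with at least three vertices. A quick Python illustration, using the cube graph and K₅:

# Cube graph drawn flat: 8 vertices, 12 edges, 6 faces -> Euler's formula holds.
V, E, F = 8, 12, 6
print(V - E + F)  # 2

# K_5 (5 vertices, every pair connected, 10 edges) breaks the planar edge bound E <= 3V - 6.
V, E = 5, 10
print(E <= 3 * V - 6)  # False: K_5 cannot be planar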

The visual nature of planarity helped me understand spatial constraints and graph embedding.

Graph coloring assigns colors to nodes so no adjacent nodes share the same color.

This applies to scheduling (avoiding conflicts), map coloring, and register allocation in compilers.

I learned to compute chromatic numbers and apply coloring algorithms to minimize resources or conflicts.
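
A small greedy colouring sketch in Python on an assumed scheduling-style conflict graph; greedy colouring always produces a valid colouring quickly, though not necessarily one using the true chromatic number.

# Conflict graph: an edge means the two items cannot share a colour (e.g. overlapping classes).
conflicts = {
    "Math":      ["Physics", "Chemistry", "English"],
    "Physics":   ["Math", "Chemistry"],
    "Chemistry": ["Math", "Physics"],
    "English":   ["Math"],
}

colour = {}
for node in conflicts:
    used = {colour[n] for n in conflicts[node] if n in colour}              # colours taken by neighbours
    colour[node] = next(c for c in range(len(conflicts)) if c not in used)  # smallest free colour
print(colour)  # {'Math': 0, 'Physics': 1, 'Chemistry': 2, 'English': 1}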

Despite its simplicity, graph coloring revealed deep challenges and practical benefits in optimization.