# If You Can Add, Then You Can Math

Mathematics is fundamentally based on putting numbers together: addition.

Subtraction, multiplication, division, and every function and algorithm – no matter how complex – are built on what is essentially addition.

- Subtraction is addition of a negative number.
- Multiplication is addition of the same number a given number of times.
- Division is inverse multiplication: deducing how many times a given number must be added to reach the total.
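The three reductions above can be sketched in code. This is my own illustration (not from the text), limited for simplicity to whole numbers, with `divide` assuming a non-negative dividend and positive divisor:

```python
def subtract(a, b):
    # Subtraction is addition of a negative number.
    return a + (-b)

def multiply(a, n):
    # Multiplication is repeated addition: add a to itself n times.
    total = 0
    for _ in range(abs(n)):
        total = total + a
    return total if n >= 0 else -total

def divide(a, b):
    # Integer division: count how many times b must be added to reach a.
    count = 0
    running = 0
    while running + b <= a:
        running = running + b
        count = count + 1
    return count
```

Note that the only arithmetic operator doing real work in each body is `+`.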

This applies to more complex mathematical processes, such as exponents and roots, logarithms, derivatives and integrals, and matrices. All of these esoteric mathemagics are based on multiplication, and thus addition. They are simply complex: a simple thing (addition) is iterated and manipulated so fervently that it gets lost in the fray. Upon dissection, apparently sophisticated processes are really just conglomerates of individual arithmetic operations.
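Exponents make a clean worked example of this nesting. The sketch below (my own, assuming non-negative integer exponents) builds a power function from repeated multiplication, which is itself built from repeated addition – so the whole thing bottoms out in `+`:

```python
def multiply(a, n):
    # Multiplication as repeated addition.
    total = 0
    for _ in range(n):
        total = total + a
    return total

def power(a, n):
    # Exponentiation as repeated multiplication – two layers of addition.
    result = 1
    for _ in range(n):
        result = multiply(result, a)
    return result
```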

Proof by example: computers use transistors to form logic gates, logic gates form modules, and those modules in essence perform arithmetic. Arithmetic, as shown above, is fundamentally addition. Computers can solve any math procedure we can fathom and code, and so demonstrate how far addition extrapolates.
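A minimal sketch of that hardware story: a full adder is a standard circuit made of XOR, AND, and OR gates, and chaining full adders bit by bit (a ripple-carry adder) adds whole binary numbers. The simulation below uses Python's bitwise operators to stand in for the gates:

```python
def full_adder(a, b, carry_in):
    # One full-adder circuit: XOR gates give the sum bit,
    # AND/OR gates give the carry-out bit.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_carry_add(x, y, bits=8):
    # Chain full adders, feeding each carry into the next bit position.
    carry = 0
    result = 0
    for i in range(bits):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result
```

Nothing here but gates, and the module they form is an adder.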

Caveat: trigonometry and its cyclical applications. These branches of mathematics are geometric ratios and angles based on a circle (the unit circle, with radius 1), and can be used to describe periodic phenomena (such as waves or rotations). I feel you can tease out the fundamental arithmetic of addition somehow, but it’s beyond me to discern where angles (whether radians or degrees) are derived and whether addition is indeed inherent to trigonometric functions. Regardless, I assume addition is in the details, as computers can solve trigonometric functions.
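One concrete hint that addition really is in the details: sine can be computed from its Taylor series, sin(x) = x − x³/3! + x⁵/5! − ⋯, which is literally a running sum of products. The sketch below (my own, with an arbitrarily chosen term count) evaluates it using only addition, multiplication, and division – operations the argument above already reduces to addition:

```python
def sin_taylor(x, terms=12):
    # Sum the Taylor series for sin(x): each step ADDS one more term.
    result = 0.0
    term = x  # first term of the series
    for k in range(terms):
        result = result + term
        # Next term: multiply by -x^2 and divide by the next two factorial factors.
        term = term * (-x * x) / ((2 * k + 2) * (2 * k + 3))
    return result
```

This is close to how math libraries actually do it: periodicity reduces any angle to a small range, and a series or polynomial approximation – additions and multiplications – does the rest.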

– – –