Simplicity vs Complexity

Complexity consumes a lot of time and resources, and the cost tends to grow exponentially with the complexity of the system.
Complexity can be measured roughly by lines of code, levels of hierarchy, and the number of tools used.
Linear code is much easier to follow than deeply nested code.
A simpler system takes much less brain power to understand, which makes developers more efficient and less tired, and leaves fewer places for bugs to appear.
A simpler system is also much easier to debug, maintain, and extend. The short Python sketch below contrasts a nested and a linear version of the same logic.
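
As an illustration, here is a minimal Python sketch of the same (made-up) validation logic written twice: once with nested conditionals and once flattened with guard clauses. The function and field names are hypothetical.

python:

# Nested version: each rule adds another level of indentation.
def register_nested(user):
    if user is not None:
        if user.get("email"):
            if user.get("age", 0) >= 18:
                return "registered"
            else:
                return "too young"
        else:
            return "missing email"
    else:
        return "no user"

# Linear version: guard clauses keep every rule at one level.
def register_linear(user):
    if user is None:
        return "no user"
    if not user.get("email"):
        return "missing email"
    if user.get("age", 0) < 18:
        return "too young"
    return "registered"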

Flexbox responsive columns

html:

<div class="row">
  <div class="col-6 col-sm-12">
    Left column. 50% on large screens. 100% on smaller screens.
  </div>
  <div class="col-6 col-sm-12 col-right">
    Right column. 50% on large screens. 100% on smaller screens. Has different background for visual difference.
  </div>
</div>

css:

.row {
  display: flex;
  flex-wrap: wrap; /* let columns wrap onto a new line on small screens */
}

.col-6 {
  flex: 0 0 50%; /* don't grow, don't shrink, 50% basis */
  background-color: #eee;
}

.col-right {
  background-color: #dde;
}

/* Below 576px, stack the columns at full width */
@media (max-width: 576px) {
  .col-sm-12 {
    flex-basis: 100%; /* overrides the 50% basis from .col-6 */
  }
}

Floating Point Arithmetic

Floating-point numbers are represented in computer hardware as base 2 (binary)
fractions. For example, the decimal fraction

0.125

has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction

0.001

has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only
real difference being that the first is written in base 10 fractional notation,
and the second in base 2.
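
For example, you can confirm in Python that 0.125 is stored exactly, using the float.as_integer_ratio() method discussed later in this section:

>>> (0.125).as_integer_ratio()
(1, 8)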

Unfortunately, most decimal fractions cannot be represented exactly as binary
fractions. A consequence is that, in general, the decimal floating-point
numbers you enter are only approximated by the binary floating-point numbers
actually stored in the machine.

The problem is easier to understand at first in base 10. Consider the fraction
1/3. You can approximate that as a base 10 fraction:

0.3

or, better,

0.33

or, better,

0.333

and so on. No matter how many digits you’re willing to write down, the result
will never be exactly 1/3, but will be an increasingly better approximation of
1/3.

In the same way, no matter how many base 2 digits you’re willing to use, the
decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base
2, 1/10 is the infinitely repeating fraction

0.0001100110011001100110011001100110011001100110011...

Stop at any finite number of bits, and you get an approximation. On most
machines today, floats are approximated using a binary fraction with
the numerator using the first 53 bits starting with the most significant bit and
with the denominator as a power of two. In the case of 1/10, the binary fraction
is 3602879701896397 / 2 ** 55 which is close to but not exactly
equal to the true value of 1/10.
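
You can confirm this in Python; the denominator 36028797018963968 is exactly 2 ** 55:

>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> 2 ** 55
36028797018963968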

Many users are not aware of the approximation because of the way values are
displayed. Python only prints a decimal approximation to the true decimal
value of the binary approximation stored by the machine. On most machines, if
Python were to print the true decimal value of the binary approximation stored
for 0.1, it would have to display

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the number
of digits manageable by displaying a rounded value instead:

>>> 1 / 10
0.1

Just remember, even though the printed result looks like the exact value
of 1/10, the actual stored value is the nearest representable binary fraction.
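
One way to see that stored value exactly is to construct a decimal.Decimal from the float, which converts it without rounding:

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')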

Interestingly, there are many different decimal numbers that share the same
nearest approximate binary fraction. For example, the numbers 0.1 and
0.10000000000000001 and
0.1000000000000000055511151231257827021181583404541015625 are all
approximated by 3602879701896397 / 2 ** 55. Since all of these decimal
values share the same approximation, any one of them could be displayed
while still preserving the invariant eval(repr(x)) == x.
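
Both claims are easy to check directly: the different decimal literals compare equal because they map to the same float, and repr() round-trips:

>>> 0.1 == 0.10000000000000001
True
>>> x = 0.1
>>> eval(repr(x)) == x
True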

Historically, the Python prompt and built-in repr() function would choose
the one with 17 significant digits, 0.10000000000000001. Starting with
Python 3.1, Python (on most systems) is now able to choose the shortest of
these and simply display 0.1.

Note that this is in the very nature of binary floating-point: this is not a bug
in Python, and it is not a bug in your code either. You’ll see the same kind of
thing in all languages that support your hardware’s floating-point arithmetic
(although some languages may not display the difference by default, or in all
output modes).

For more pleasant output, you may wish to use string formatting to produce a limited number of significant digits:

>>> import math
>>> format(math.pi, '.12g')  # give 12 significant digits
'3.14159265359'

>>> format(math.pi, '.2f')   # give 2 digits after the point
'3.14'

>>> repr(math.pi)
'3.141592653589793'

It’s important to realize that this is, in a real sense, an illusion: you’re
simply rounding the display of the true machine value.

One illusion may beget another. For example, since 0.1 is not exactly 1/10,
summing three values of 0.1 may not yield exactly 0.3, either:

>>> .1 + .1 + .1 == .3
False

Also, since 0.1 cannot get any closer to the exact value of 1/10 and
0.3 cannot get any closer to the exact value of 3/10, pre-rounding with
the round() function cannot help:

>>> round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1)
False

Though the numbers cannot be made closer to their intended exact values,
the round() function can be useful for post-rounding so that results
with inexact values become comparable to one another:

>>> round(.1 + .1 + .1, 10) == round(.3, 10)
True

Binary floating-point arithmetic holds many surprises like this. The problem
with “0.1” is explained in precise detail below, in the “Representation Error”
section. See The Perils of Floating Point
for a more complete account of other common surprises.

As that says near the end, “there are no easy answers.” Still, don’t be unduly
wary of floating-point! The errors in Python float operations are inherited
from the floating-point hardware, and on most machines are on the order of no
more than 1 part in 2**53 per operation. That’s more than adequate for most
tasks, but you do need to keep in mind that it’s not decimal arithmetic and
that every float operation can suffer a new rounding error.
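
The 53-bit significand behind that bound is visible through sys.float_info:

>>> import sys
>>> sys.float_info.mant_dig   # bits in the significand
53
>>> sys.float_info.epsilon == 2 ** -52
True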

While pathological cases do exist, for most casual use of floating-point
arithmetic you’ll see the result you expect in the end if you simply round the
display of your final results to the number of decimal digits you expect.
str() usually suffices, and for finer control see the str.format()
method’s format specifiers in Format String Syntax.

For use cases which require exact decimal representation, try using the
decimal module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.
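
With decimal, the earlier 0.1 + 0.1 + 0.1 comparison behaves as expected, because the decimal literals are stored exactly:

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
True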

Another form of exact arithmetic is supported by the fractions module
which implements arithmetic based on rational numbers (so the numbers like
1/3 can be represented exactly).
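
For example:

>>> from fractions import Fraction
>>> Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3) == 1
True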

If you are a heavy user of floating point operations you should take a look
at the Numerical Python package and many other packages for mathematical and
statistical operations supplied by the SciPy project. See <https://scipy.org>.

Python provides tools that may help on those rare occasions when you really
do want to know the exact value of a float. The
float.as_integer_ratio() method expresses the value of a float as a
fraction:

>>> x = 3.14159
>>> x.as_integer_ratio()
(3537115888337719, 1125899906842624)

Since the ratio is exact, it can be used to losslessly recreate the
original value:

>>> x == 3537115888337719 / 1125899906842624
True

The float.hex() method expresses a float in hexadecimal (base
16), again giving the exact value stored by your computer:

>>> x.hex()
'0x1.921f9f01b866ep+1'

This precise hexadecimal representation can be used to reconstruct
the float value exactly:

>>> x == float.fromhex('0x1.921f9f01b866ep+1')
True

Since the representation is exact, it is useful for reliably porting values
across different versions of Python (platform independence) and exchanging
data with other languages that support the same format (such as Java and C99).

Another helpful tool is the math.fsum() function which helps mitigate
loss-of-precision during summation. It tracks “lost digits” as values are
added onto a running total. That can make a difference in overall accuracy
so that the errors do not accumulate to the point where they affect the
final total:

>>> sum([0.1] * 10) == 1.0
False
>>> math.fsum([0.1] * 10) == 1.0
True

Web-Dev Notes

"DRY is often misinterpreted as the necessity to never repeat the exact same thing twice. This is impractical and usually counterproductive, and can lead to forced abstractions, over-thought and over-engineered code." (Harry Roberts)

DRY, SRP, modularity, etc. are not ultimate goals or strict rules. They are principles and recommendations.

BEM

BEM colors:

  • .primary
  • .secondary
  • .tertiary
  • .light
  • .dark

BEM code sample:

<div class="c-card">
  <img class="c-card__img" src="http://via.placeholder.com/350x150" alt="">
  <div class="c-card__header">
    Featured
  </div>
  <div class="c-card__body">
    <h4 class="c-card__title">Card title</h4>
    <div class="c-card__text">Card additional content.</div>
    <a href="#" class="button button--primary">Link</a>
  </div>
  <div class="c-card__footer">
    Posted 12 days ago
  </div>
</div>

Folder structure:

  • components: reusable components. For example: c-pager
  • layout: layout styles, like header, footer, sidebar
  • pages: context specific styles. For example: contact-us
  • utilities: utility helper classes with single function. Often using !important to boost their specificity. For example: u-text-center

Links: