For those who are visually impaired, the narrative is similar: Braille or high-contrast signage is often absent, which makes navigating public spaces close to impossible. Individuals who are hard of hearing encounter their own challenges, particularly in settings where verbal communication is the primary mode. Without visual cues or sign language support, announcements and important information can be missed in subway stations and airports alike. The issue, however, extends well beyond ramps and physical infrastructure; this inaccessibility is also deeply embedded in our digital spaces. For example:
Videos without captions or transcripts: Despite the widespread use and availability of multimedia content, transcripts and captions are often lacking. This prevents those who are hard of hearing from participating in an experience that could be informative or enjoyable.
Websites that are not compatible with screen readers: People who are visually impaired rely on screen readers to navigate the web. However, a lot of websites don't have the required semantic markup, which leaves this user group lost in a web of confusing nonsense.
Lack of keyboard accessibility: For those who are incapable of using a mouse, keyboard navigation is essential. Unfortunately, many digital platforms are not designed with keyboard-only users in mind, making them nearly useless for this group.
Poorly designed user interfaces: People with cognitive impairments often find it difficult to complete tasks that should be simple due to excessively complicated or unintuitive interfaces.
These are not just design flaws; they are glaring oversights. By ignoring these crucial aspects of web design, we're excluding a significant portion of our population from fully participating in the online world. This issue calls for mass awareness and action among governing bodies, architects, and developers.
Inaccessibility isn't always a result of oversight; much of the time, it stems from financial constraints. The cost of retrofitting structures with ramps, elevators, or Braille signage can be prohibitive, especially for small businesses already running on thin margins. Even with the willingness to accommodate, these financial barriers push accessibility features down the priority list. Similarly, the time and investment needed to ensure websites and applications are fully accessible can be heavy, especially when the benefit appears to reach only a minority of the user base.
Digital adaptations usually involve code adjustments and design improvements, which are less costly than the materials and labor needed for constructing physical accommodations. Therefore, making digital environments accessible is not only a financially sound decision but also extends the software’s utility to a broader audience, reinforcing its value even further. Let’s investigate some methods to accomplish this.
A well-organized website ensures that everyone can access your online space. This comes down to two main things: using meaningful HTML elements and structuring your headings properly.
Imagine walking into a building without any signs—you'd feel lost, right? A website lacking a proper heading hierarchy is like a maze without any hints for users with screen readers. To specify how your content is organized, it is imperative that you use HTML heading elements (h1, h2, h3, etc.) in a hierarchical fashion. Just by taking this easy step, you can help screen readers comprehend how your page is laid out and make it easier for users to find the information they need.
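For illustration, a sketch of a hierarchical heading structure (the page content here is hypothetical):

```html
<h1>Accessible Cooking</h1>        <!-- one h1: the page's main topic -->
  <h2>Recipes</h2>                 <!-- a major section -->
    <h3>Vegetarian</h3>            <!-- a subsection of Recipes -->
    <h3>Desserts</h3>
  <h2>About the Author</h2>        <!-- the next major section -->
```

The indentation is only visual; what matters to a screen reader is that the levels descend one step at a time, with no skips from h1 straight to h3.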
Now, think of semantic HTML elements as the building blocks of a universally understood language. These are the elements that explain their meaning to the browser and the developer, such as the header, footer, article, section, nav, and so on. But more importantly, they provide screen readers with information about the website's structure.
One of the most important strategies for making your website accessible is to use semantic HTML. Screen readers significantly depend on elements such as buttons, forms, links, and lists to convey particular kinds of content and to give context and meaning. Because of this, your website becomes more accessible the more semantically accurate your HTML is.
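As a sketch (the content is placeholder), these landmarks in action let a screen reader jump straight to the navigation or the main content:

```html
<header>
  <nav>
    <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/blog">Blog</a></li>
    </ul>
  </nav>
</header>
<main>
  <article>
    <h1>Post title</h1>
    <section>
      <h2>First section</h2>
      <p>Body text goes here.</p>
    </section>
  </article>
</main>
<footer>
  <p>Contact information</p>
</footer>
```

A generic `<div>` soup conveys none of this structure; the landmark elements do it without a single extra attribute.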
One technique for scrutinizing the structure of a webpage is to take advantage of built-in browser tools. In most modern browsers, such as Chrome, open the developer tools (F12 or Ctrl + Shift + I), press Ctrl + Shift + P to open the command menu, and type "accessibility" to reveal the accessibility panel. This panel lets you see a page from the perspective of assistive technologies like screen readers.
This panel also displays a tree structure and element details, illustrating how screen-reading software reads your webpage. It lets you check whether an element is described appropriately for someone using a screen reader or other assistive device, how it is narrated, and in what order it appears. To take it a step further, consider testing your website with a screen reader yourself; one is already included in the majority of operating systems. Navigating your website with your eyes closed, relying only on the screen reader's voice and your keyboard's Tab key and hotkeys, gives you the same experience you're offering your visually impaired visitors.
Imagine using a screen reader and coming across an image with no text alternative: you hit a void in the content that prevents you from accessing it and disconnects you from the page's overall purpose. The "alt" attribute fills this gap, giving people who can't see the image the context they need. When utilizing "alt", the following are recommended practices:
Be descriptive: Provide a succinct and understandable description of the image in your "alt" text. If the image contains text, the "alt" attribute should include the same words.
Be relevant: Avoid loading the "alt" text with unnecessary details that detract from the image's main point.
Avoid redundancy: If the image is purely decorative and adds no additional information to the page, an empty "alt" attribute (alt="") is recommended to avoid redundancy for screen reader users. Don’t use text saying "image of..." or "picture of...".
It should be noted that context matters when using this attribute. For example, on an art site, you may want to provide more detail to convey the style, mood, and composition, as these components are needed to best understand and appreciate the artwork.
<div>
<img src="painting.jpg" alt="Oil painting by
Harald Sohlberg titled 'Winter Night in the
Mountains', depicting a vivid, snowy landscape
dominated by a large red house with a church
spire in the background, all set against a dark,
cloud-filled sky.">
</div>
For a more utilitarian site, it is best to keep the alt text short, focusing on the function of the image and its relevance to the content around it.
<div>
<img src="painting.jpg" alt="Snow-covered houses">
</div>
Users should be able to navigate through all interactive elements in an application by using the 'tab' key to move forward and 'shift + tab' to move backward for the best keyboard accessibility. The 'enter' or 'space' bar should be able to be used to activate interactive elements such as buttons or dropdowns. When dropdown menus are present, arrow keys ought to make it possible to navigate through the available options.
Always utilize the correct input controls, such as buttons, anchor tags, or form elements, for creating interactive elements. These are specified in HTML standards and supported by browser implementations to provide basic keyboard functionality right out of the box, eliminating the need for extra development work.
<!-- correct use of a button -->
<button onclick="alert('You clicked me!')">Click
Me!</button>
<!-- incorrect use of a div as a button -->
<div onclick="alert('You clicked me!')">Click
Me!</div>
<!-- correct use of an anchor with href, which
is focusable and can be activated with the
keyboard -->
<a href="#content">Skip to content</a>
<!-- incorrect use of a span as a link -->
<span onclick="location.href='#content'">
Skip to content</span>
For instance, when you use a button element, it is automatically focusable and can be activated with the ‘enter’ key by default. This is because browsers are built to recognize these elements as interactive and to provide them with certain accessible features. This is another advantage of using semantic HTML! It ensures accessibility for free. Therefore, it isn’t recommended to use non-interactive components like "divs" in place of interactive features. This is because different states like hover, active, and focus are naturally understood by semantic elements. The focus state should always be clearly visible, indicated by a focus ring or outline to signal the active element.
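If the browser's default focus ring clashes with your design, restyle it rather than removing it. A minimal sketch (the color values are arbitrary):

```css
/* Never do this: it hides the focus indicator entirely. */
/* button:focus { outline: none; } */

/* Instead, replace the default ring with a clearly visible one. */
button:focus-visible {
  outline: 3px solid #1a73e8;
  outline-offset: 2px;
}
```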
Events that rely solely on mouse interaction should especially be avoided. Instead, make sure that all events can also be triggered via the keyboard, providing equal functionality for all users.
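When a non-semantic element really must behave like a button, it needs a role, a tabindex, and a keyboard handler to match what a native button provides for free. A sketch, where the id and handler are hypothetical:

```html
<div id="save-button" role="button" tabindex="0">Save</div>
<script>
  const btn = document.getElementById('save-button');
  const activate = () => console.log('Saved!');

  // Mouse users click; keyboard users press Enter or Space,
  // the same keys a native <button> responds to.
  btn.addEventListener('click', activate);
  btn.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault(); // stop Space from scrolling the page
      activate();
    }
  });
</script>
```

Even so, the first choice should always be the real `<button>` element, which makes all of this boilerplate unnecessary.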
The spectrum of visual impairments stretches from total blindness to partial vision loss. Luckily, well-established standards exist to make digital content accessible to this demographic. A crucial aspect of these standards is the color contrast ratio, which defines the difference in luminance between the text (or a focal element) and its background, ensuring that important content is always distinguishable and readable.
To guarantee that text is easily readable, the Web Content Accessibility Guidelines (WCAG) offer recommendations for color contrast ratios. For instance, larger text should have a minimum contrast ratio of 3:1, whereas standard text should have a minimum ratio of 4.5:1. By clicking on a text element's hex color code, you can check the text-to-background contrast ratio right in your inspector.
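Under the hood, WCAG defines this ratio in terms of relative luminance. As a sketch, you can compute it yourself from RGB values (the helper names here are my own):

```javascript
// WCAG 2.x relative luminance of an sRGB color given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]) {
  const linearize = (channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  const [R, G, B] = [r, g, b].map(linearize);
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05), ranging from 1 to 21.
function contrastRatio(foreground, background) {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black text on a white background yields the maximum ratio, about 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]));
```

A helper like this makes it easy to check an entire palette against the 4.5:1 and 3:1 thresholds in a test suite instead of one element at a time in the inspector.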
Red-green color blindness is one of the most common kinds, so try to keep that in mind when selecting colors for your UI. I use the browser extension "Let'sgetcolorblind" to check if designs remain clear for people with different color perception levels. With this tool, you can examine your designs in various color blindness scenarios and make sure that the context of certain elements is still understandable in any hue or tone.
When color blindness could compromise clarity, incorporating icons and additional contextual elements may be helpful as well.
Even though our everyday environments still have significant structural flaws that limit people with disabilities, as developers, we can take steps to remove the barriers that they face online. The need to integrate this accessibility into both our digital and physical spaces is not necessarily about compliance or ticking off boxes on a checklist—it's about upholding the dignity and autonomy of every individual and striving towards a world where people can live freely and independently to the best of their ability. When we forget to consider the wide spectrum of human abilities, we unknowingly contribute to an environment of exclusion. This is one that not only restricts the individual, but also degrades society by stifling the contribution and potential of all its members.
However, the state of AI in the real world is much different. While it is difficult to predict which Hollywood film will accurately depict the future, many individuals are worried about the impact of AI on the job market. Factory workers, delivery drivers, and software developers alike are concerned because of AI's potential to automate portions of their work. In fact, an Evans Data Corporation survey found that nearly one third of developers surveyed fear becoming obsolete due to AI. But is such a significant amount of fear rational? In this article, the code produced by large AI platforms like OpenAI will be compared to clean code (production code) written by professional developers.
“Malevolent robot stories used to be more about brawn than brain — so it was a genuine shock for audiences in 1968 when the sentient HAL-9000 computer calmly said, ‘I'm sorry, Dave, I'm afraid I can't do that.’ Above, Gary Lockwood and Keir Dullea in 2001: A Space Odyssey. Metro-Goldwyn-Mayer/Getty Images.”
AI has come a long way since the invention of the perceptron algorithm in 1957. It has acted as a basic building block for neural networks and was partially responsible for recent advancements such as supervised and unsupervised machine learning. These have led to machines that can challenge professional chess and go players, write essays, create art or music, and, more importantly for this article, produce code. Perhaps the most recognized and largest platform is OpenAI, which created ChatGPT. Anyone can log into ChatGPT and ask it to produce code based on the user's description. At first glance the code it produces seems extraordinary; it is produced quickly and confidently, with explanations of how the code works. However, ChatGPT is susceptible to creating poorly written code on its first attempt. The code below depicts ChatGPT's first attempt to solve a common Euler problem for finding the nth prime number.
(defn prime? [n]
  (and (<= n 1) false
       (not-any? #(zero? (rem n %)) (range 2 (Math/sqrt (inc n))))))

(defn nth-prime [n]
  (let [primes (filter prime? (range 2 (inc Integer/MAX_VALUE)))]
    (nth primes (dec n))))

(defn -main []
  (let [n 10001]
    (let [result (nth-prime n)]
      (println (str "The " n "th prime number is " result)))))
AI tools like this can feasibly be used for simple Euler problems or data structure questions, but even so, syntax errors may arise and the code may not work as expected. In this case, upon running ChatGPT's output in a coding environment, the program takes a moment to process and then throws an IndexOutOfBoundsException.
Based on clean code principles, this code would be considered neither clean nor functional. The naming of the functions is acceptable, but the implementations are not. For instance, the prime? function should refactor out a helper that checks whether one number divides another evenly. Likewise, the filter function would be better suited to the task and would read more clearly. Finally, the nth-prime function could be simplified by using (range) in place of Integer/MAX_VALUE; the function throws the exception because it does not correctly exploit the laziness that range can provide. There are certainly other issues with the code, but the next section instead focuses on code that I wrote for the same prompt.
(defn mod-equal-zero? [dividend divisor]
  (zero? (mod dividend divisor)))

(defn prime-number? [n]
  (if (<= n 1)
    false
    (empty? (filter #(mod-equal-zero? n %) (rest (rest (range n)))))))

(defn nth-prime [n]
  (nth (filter prime-number? (range)) n))
The code above runs quickly and produces the correct result. Additionally, because there is a separate function for checking whether a number is divisible by another, the code is easier to follow and uses Clojure-specific tools to produce a correct result. It is crucial to note that the code was created using test-driven development (TDD), a powerful discipline that Clean Coders developers use and that AI tools are not yet capable of following. TDD may be a controversial topic amongst the world's developers, but it is generally effective at reducing errors and bugs within code. This is a prime example of something that human developers can do that AI cannot. While a user could provide tests for ChatGPT to try to pass, based on the way AI functions, it is not capable of strictly following TDD. In the screenshot below, ChatGPT was asked about its TDD capabilities based on how it produces code.
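The article's actual test suite isn't shown, but as a hedged sketch, tests in this style, using clojure.test and assuming a hypothetical 1-indexed nth-prime, would be written first and would drive the implementation one case at a time:

```clojure
(ns prime.core-test
  (:require [clojure.test :refer [deftest is]]))

;; Under TDD, tests like these come first, and the implementation
;; grows just enough to pass each one in turn.
(deftest prime-number?-test
  (is (false? (prime-number? 1)))
  (is (true?  (prime-number? 2)))
  (is (false? (prime-number? 9))))

(deftest nth-prime-test
  (is (= 2 (nth-prime 1)))          ; the 1st prime
  (is (= 13 (nth-prime 6)))         ; the 6th prime
  (is (= 104743 (nth-prime 10001)))) ; the Euler problem's answer
```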
This one example does not definitively prove that AI will be unable to replace human developers, but it does suggest that there may be an opportunity for coexistence instead. One reason is that ChatGPT’s greatest strength was its ability to fix its mistakes through human intervention. The code below depicts ChatGPT’s second attempt after being given feedback.
(defn prime? [n]
  (if (<= n 1)
    false
    (not-any? (fn [i] (zero? (rem n i))) (range 2 (int (Math/sqrt n) + 1)))))

(defn nth-prime [n]
  (loop [count 0
         num 2]
    (if (= count n)
      (dec num)
      (if (prime? num)
        (recur (inc count) (inc num))
        (recur count (inc num))))))

(defn -main []
  (let [n 10001] ; Change this to the desired n
    (let [result (nth-prime n)]
      (println (str "The " n "th prime number is " result)))))
ChatGPT did revise its initial attempt, but in the prime? function it tries to add 1 to the integer square root of n. The approach is valid, but this is not how numbers are added in Clojure. The correct syntax would be (+ (int (Math/sqrt n)) 1), or better yet, (inc (int (Math/sqrt n))).
After fixing this, the code does work, but it lacks many of the fundamentals of clean code. Functionality-wise, the loop works fine here, but loops are generally considered unidiomatic, and the excess code could be replaced using the filter function, as demonstrated by my own code above. Finally, as mentioned earlier, it still does not refactor unless specifically told to. In the code below, ChatGPT was prompted again to change its implementation.
(defn prime? [n]
  (and (<= n 1) false
       (not-any? #(zero? (rem n %)) (range 2 (Math/sqrt (inc n))))))

(def primes (filter prime? (cons 2 (mapcat #(list %1 %2) (range 3 (dec Integer/MAX_VALUE) 2) (repeat 3 5)))))

(defn nth-prime [n]
  (nth primes (dec n)))

(defn -main []
  (let [n 10001] ; Change this to the desired n
    (let [result (nth-prime n)]
      (println (str "The " n "th prime number is " result)))))
While the implementation was changed, the code no longer worked and required additional time for the prompter to explain necessary revisions. The changed code is below.
(defn divisible? [n divisor]
  (zero? (rem n divisor)))

(defn prime? [n]
  (and (<= n 1) false
       (not-any? #(divisible? n %) (range 2 (Math/sqrt (inc n))))))

(def primes (filter prime? (cons 2 (mapcat #(list %1 %2) (range 3 (dec Integer/MAX_VALUE) 2) (repeat 3 5)))))

(defn nth-prime [n]
  (nth primes (dec n)))
Finally, after three more attempts, I was able to get ChatGPT to produce acceptable code that works and is mostly idiomatic. However, it had forgotten that I wanted it to refactor out a function that checks whether one number is divisible by another, so I asked it one more time.
(defn divisible? [n i]
  (zero? (rem n i)))

(defn prime? [n]
  (if (<= n 1)
    false
    (not-any? #(divisible? n %) (range 2 (int (Math/sqrt n) + 1)))))

(def primes (filter prime? (iterate inc 2)))

(defn nth-prime [n]
  (nth primes (dec n)))

(defn -main []
  (let [n 10001] ; Change this to the desired n
    (let [result (nth-prime n)]
      (println (str "The " n "th prime number is " result)))))
Even in the fourth attempt shown above, an error recurred in the prime? function: it again tries to add 1 to the square root of n. After I intervened and resolved this, the code was effective and efficient.
The evidence is mounting that in order to use this tool to write code, the prompter must already know how to code so that they can revise the output. As demonstrated above, it may take multiple attempts to get the AI to write clean, working code for something as simple as finding a prime number. A novice in the field of software would have a hard time finding what is wrong with the code and then telling the AI what needs to change to make it function as intended. If this is the case for a simple prompt, far more effort would likely be necessary to get ChatGPT to produce a larger body of code such as a game or website. The evident conclusion is that it would be time-consuming and frustrating for a developer to rely strictly on ChatGPT for a complex project, and nearly impossible for someone who is not a developer.
Before answering the question posed at the beginning of this article, it is important to acknowledge that tools like ChatGPT are remarkable and have helped, and will continue to help, people from all walks of life. The point of this article is not to advise against the use of AI, but to demonstrate why it should not be feared by software developers or other professionals. The examples in this article demonstrate that current AI tools are in no state to outright replace developers and should instead be perceived as another tool in the developer's arsenal. In conclusion, this work reveals that in order to get clean, maintainable code from ChatGPT, the user needs to be well-versed in software development. Even in an optimal situation, frustration and headaches arise before anything useful comes out. Therefore, businesses will continue to get better value from hiring real developers who follow clean coding principles and can realize their vision through descriptive iteration planning meetings (IPMs) and user stories than they would from attempting to use AI alone.
The illustration for this article was drawn by Clean Coders team member Nick Meccia, using the following image from openart.ai as a reference:
These tendencies were surely my responsibility to manage; however, they were not discouraged by the languages I was using (if anything, they were encouraged). And this is why I find functional programming so compelling: it is not some sort of formula for writing perfect code; it is a discipline. This being the case, I find that languages that embrace the paradigm—the discipline—wholesale make it harder for me to write bad code.
There is no silver bullet for writing perfect code, though; there will always be tradeoffs when operating in a new paradigm. For truly 'functional' code, these tradeoffs have certain implications for how we interact with persistent data stores, and they may seem outlandish at first. Here, I'd like to take a moment to dig into the source of these tradeoffs and show you how my favorite implementation of these principles—Datomic—handles them.
To begin, let's take a moment to understand why one might want to use functional programming in the first place. What qualities does this paradigm have that make it so hard to write bad code? The foremost of these is the notion that our functions should be pure, meaning that they will always return the same value for the same input. This maximizes the predictability (and testability) of our programs. As a consequence, our functions will (or ought to) avoid changing state outside their scope, i.e., causing side effects. If the state exterior to a function is not changing, then it follows that when we declare values they generally don't change; they are immutable—another principle of functional programming.
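In Clojure terms, the distinction might be sketched like this (the examples are mine, not drawn from any particular codebase):

```clojure
;; Pure: the same input always produces the same output,
;; and nothing outside the function is touched.
(defn with-tax [price]
  (* price 1.08))

;; Impure: the result depends on, and mutates, state
;; outside the function's own scope.
(def running-total (atom 0))

(defn add-to-total! [price]
  (swap! running-total + price))
```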
But surely being purely avoidant of unpredictability (side effects) goes against the very nature of a useful program in the first place... right? To be made use of, programs have to be able to consume and react to user input, the only reliable part of which is being unreliable. They also often need to persist data based on this input: after all, I wouldn't like it very much if my bank's software kept my account balance truly immutable regardless of the money I put in.
Indeed, there are instances such as writing output that necessarily involve causing side effects (e.g. writing to stdout), but obeying the principles of functional programming means that we avoid these instances whenever possible.
The question then becomes whether our persistent data store is such a place where these functional principles can be obeyed. At first blush, the answer seems to be an obvious "no." After all, what is a database but a method of recording the current state of different data structures as they change during the runtime of a program? Whether your database is full of rows, or documents, or graph nodes, the purpose of them living there is to be modified and recalled, isn't it?
Indeed, it seems unavoidable to "break the rules" by allowing mutability in our data store, but perhaps we can mitigate it. When state changes it does so in one of three ways: creation, deletion, or modification. In each of these, state begins with some value (nil in the case of creation), and is changed to be something else (nil in the case of deletion). What if instead of only storing the current value of some state, we simply stored the fact that one of these changes happened? In this way, every change in state becomes a "creation," the creation of a record storing the newest value of the state.
Doing so not only effectively eliminates two of the three ways state can change, but also allows us to obey the principle of functional purity. Since old values are not being deleted, I always get the same thing when I ask the database for a particular thing at a particular time. Our database is acting like a pure function of our query.
This inclusion of time as an intrinsic characteristic of all matters of state is a fundamental precept of the Clojure database, Datomic, developed by Rich Hickey. In traditional database design, we think of our database as holding things: rows, columns, documents, graph nodes. Datomic does not store things. Datomic stores facts.
Facts—from the Latin factum, the past participle of facere, literally "a thing done"—implicitly have time associated with them. If I say, "The frog is on the log," I am stating a fact that at this moment said frog is sitting upon said log. If I say, "The frog hopped off the log," I am stating two facts, namely, that the frog was on the log at some point in time, and that at a later point in time, he was no longer on the log. If I wanted to determine the current state of the frog being on the log, I could find the most recent fact about the frog's being on the log and read it. Likewise, I could still find all the points in time at which he hopped onto and off of the log.
Datom:
an immutable atomic fact that represents the addition or retraction of a relation between an entity, an attribute, a value, and a transaction.
Such facts are stored as data structures referred to as datoms. They are called "atomic" because each one concerns a single attribute of a single entity at a single point in time. A datom consists of four components: an entity (E), an attribute (A), a value (V), and a transaction (T).
Entities are as close to actual things as Datomic gets; an entity essentially consists of all the current datoms that share the same id. It is like defining a thing as the list of everything you can predicate of it. If I wanted to define an entity, the sun, as "a huge ball of flaming gas," I might represent it as a Datomic entity with id = 1 like so:
| E | A | V | T |
|---|---|---|---|
| 1 | :size | :huge | 123 |
| 1 | :material | :flaming-gas | 123 |
It is simply the coming together of all things we wish to predicate of it.
You may notice in the above example the T field on the datom: the transaction. Transactions are simply additions of datoms to the Datomic database. We can transact datoms one at a time, or group them together. In the above example, the datoms represented by each row in the table were transacted together, since they share the same transaction id.
The list of all transactions is how Datomic keeps track of the history of the database. Every datom that's ever been transacted lives here in order. When we want to get the "current state" of an entity, Datomic will give us the most recent set of datoms associated with it, but the old ones still live on in the history. As mentioned before, this is how Datomic acts as a sort of "pure function," since it will always give you the same value for a datom at a given point in time.
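As a sketch using the Datomic Peer API (assuming an existing connection conn, and reusing the entity id 1 and transaction 123 from the table above):

```clojure
(require '[datomic.api :as d])

;; A database *value*: an immutable snapshot of all facts.
(def db (d/db conn))

;; The current value of :size for entity 1.
(d/q '[:find ?v . :where [1 :size ?v]] db)

;; The same query against the database as of transaction 123
;; always returns what was true then -- the "pure function" behavior.
(d/q '[:find ?v . :where [1 :size ?v]] (d/as-of db 123))
```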
As with all decisions in software development, there are tradeoffs to this approach. From what I can tell, the greatest source of these tradeoffs lies in the fact that Datomic simply refuses to forget things. This is fantastically useful when you need to implement an audit trail (it's already done for you) or a user runs into an issue (you can step through every change to the user's state around the time of the incident). But, admittedly, you're storing all of that historical data whether you're using it or not.
On the one hand, disk space is cheap these days, but this fact only goes so far. Truly forgetting data is a requirement of many programs either for legal reasons (scrubbing PII from the database) or to conserve disk space.
The latter of these concerns can be mitigated via the :db/noHistory flag, which can be set on an attribute in the schema. Doing so ensures that only the most recent value of that attribute is stored, causing Datomic to behave more like a traditional database for attributes that have it enabled.
For the former concern, data that legally must be forgotten can be removed via an asynchronous feature of Datomic referred to as 'excision.' This process produces an index of the database that does not include datoms meeting a particular criterion (e.g., being associated with a particular entity). Querying with respect to this newly indexed version of the database will not include those excised entities, though the process is a function of the size of the entire database and can therefore be totally impractical for anything larger than a few gigabytes.
There are times, however, when neither of these techniques is sufficient. If space is an issue and the data is already in the database, it is too late to apply the :db/noHistory technique, and the database is probably too big to excise. In this case, it may be necessary to resort to more extreme measures.
'Decanting' is a technique wherein the transaction history of a Datomic database is read in order, undesirable transactions are filtered out, and the remaining datoms are sent to a new database. This technique is very extreme and requires a "steady hand," so to speak, as Datomic's partitioning system will not allow user-defined entity ids. As a result, you will have to track the ids generated by the new database and retroactively assign them to their references to maintain relationships between entities.
This can be extremely burdensome if there are other databases that reference these ids, as a migration will be required to update them to the new value—potentially taking down two applications while the process is running.
Even more extreme is the 'snapshotting' technique. In some cases, undesirable datoms were transacted along with those which needed to be migrated to the new database. In this case, you may need to migrate on a per-datom or per-entity basis, which suffers from all the same pitfalls as decanting, but loses history in the process.
These are extreme cases, and the vast majority of Datomic instances will not run into problems requiring their implementation, but they are prime examples of what happens when we take the principles of functional programming to their logical conclusions. Whether these tradeoffs are manageable will, of course, depend on your context. For myself, I continue to use (and enjoy) Datomic for its compliance with the functional way of programming, as well as its simplicity and convenience when it comes to how its data is organized and its interactivity with Clojure. When it comes to an environment that forces me to write code that I am proud of and remains maintainable, the Clojure/Datomic combination is unparalleled in my experience.
Animation is a great way to make a dynamic, engaging website and to provide users with a delightful, fun experience. With animation, you can reinforce visual hierarchy, provide visual feedback for a user action, show relationships, and smooth transitions between elements. There are several ways to create an animation for a website; in this post, I will show how to make animations with CSS.
Before we start to explore the magical world of CSS animation, I would like to remind you about the transform property. It is an essential building block of our knowledge citadel.
The transform CSS property makes it possible to translate, rotate, scale, and skew any HTML element.
The translate function moves an element up, down, right, and left.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
transform: translateX(100%) translateY(100%);
}
</style>
The rotate function rotates an element clockwise or anticlockwise. You can specify the rotation value in degrees, turns, or radians. The other elements on a page will not be affected by this transformation.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
padding-top: 150px;
padding-left: 150px;
}
.child {
background-color: gold;
height: 200px;
width: 200px;
}
.parent:hover .child {
transform: rotate(45deg);
}
</style>
The scale function changes the size of an element, affecting both the X and Y axes. scaleX() increases or decreases the width of an element; scaleY() increases or decreases the height. By default, the transformation originates at the element's center. You can change the origin of the transformation by setting the transform-origin property.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
padding-top: 100px;
padding-left: 100px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
transform: scale(1.5);
}
</style>
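To illustrate the transform-origin mentioned above, we could make the ball grow from a corner instead of the center. The values here are just one possibility:

```css
.parent:hover .child {
/* grow from the top-left corner instead of the center */
transform-origin: top left;
transform: scale(1.5);
}
```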
The skew function tilts an element along the X or Y axis. The angle can be set in degrees, gradians, radians, or turns.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
padding-top: 100px;
padding-left: 100px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
transform: skew(15deg, 15deg);
}
</style>
You can create a simple animation with the transition properties. A transition is a movement from one CSS style to another, and it usually runs on some trigger, like a hover state or a class change. Transition properties should be applied to the element's base state, not to the state where the change occurs.
Let’s look at an example which shows a golden ball on a gray background.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
</style>
<body>
<div class="parent">
<div class="child"></div>
</div>
</body>
Now I want the golden ball to move from left to right. For this purpose, I will add the transform property.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
transform: translateX(100%)
}
</style>
This animation does not look natural or realistic at all: the ball jumps from one side to the other. To make it move smoothly from left to right, let’s add some properties.
First, let’s specify the length of the transition by adding the transition-duration property. transition-duration sets the time an animation should take to complete, in seconds or milliseconds. If no duration is set, the transition will not be visible, because the default value is 0s.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
transition-duration: 1s;
}
.parent:hover .child {
transform: translateX(100%);
background-color: aquamarine;
}
</style>
By default, a transition is applied to all properties. You can specify which properties will be affected by adding transition-property. It accepts one property or several, separated by commas.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
transition-duration: 1s;
transition-property: transform;
}
.parent:hover .child {
transform: translateX(100%);
background-color: aquamarine;
}
</style>
Next, I would like to specify a timing function. transition-timing-function defines the acceleration curve of the transition effect. There are several built-in easing functions: ease (the default), linear, ease-in, ease-out, and ease-in-out.
You can also customize a timing function with the cubic-bezier() function. It is based on four points: the first (0, 0) and the last (1, 1) are fixed, and you define the two control points in between. To build a curve, you can use online tools like https://www.cssportal.com/css-cubic-bezier-generator/, https://easings.co/, or https://10015.io/tools/css-cubic-bezier-generator.
Alternatively, open the browser dev tools and inspect the element. Find the line with transition-timing-function and click on the icon with a curve. Make adjustments, copy the function, and paste it into the CSS.
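Dropped into our example, a custom curve might look like this. The four control values are arbitrary illustration values, chosen to give a slight overshoot:

```css
.child {
transition-property: transform;
transition-duration: 1s;
/* eases in, then overshoots slightly past the end before settling */
transition-timing-function: cubic-bezier(0.68, -0.55, 0.27, 1.55);
}
```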
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
transition-property: transform, background-color;
transition-duration: 1s;
transition-timing-function: ease-in-out;
}
.parent:hover .child {
transform: translateX(100%);
background-color: aquamarine;
}
</style>
Last but not least is transition-delay, which specifies when the transition will start. A negative value is allowed: the transition then begins immediately, part-way through, as if it had already been playing for that amount of time. Let’s add a transition-delay of 200ms. After the mouse hovers over the parent element, the transition will start with a 200ms delay.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
transition-property: transform, background-color;
transition-duration: 1000ms;
transition-timing-function: ease-in-out;
transition-delay: 200ms;
}
.parent:hover .child {
transform: translateX(100%);
background-color: aquamarine;
}
</style>
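A negative delay, by contrast, makes the transition start part-way through its cycle. For example, using the same selectors as above:

```css
.child {
transition-property: transform, background-color;
transition-duration: 1000ms;
/* starts as if the transition had already been running for 200ms */
transition-delay: -200ms;
}
```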
For cleaner CSS, you can shorten these properties to one line. The order of the values is not important, except for duration and delay: the browser reads the first time value as transition-duration and the second as transition-delay.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
transition: transform 1000ms ease-in-out 200ms, background-color 1000ms ease-in-out 200ms;
}
.parent:hover .child {
transform: translateX(100%);
background-color: aquamarine;
}
</style>
For more complex animations, CSS has the @keyframes at-rule.
Keyframes make it possible to create complex and flexible animations. The keyframes contain all information about how an animation will look from start to finish.
First, let’s specify properties for the animation. Animation properties go on the selector where you want the animation to run. In this example, I want the golden ball (child) to move when I hover over the gray background (parent), so I will set animation-name and animation-duration on .parent:hover .child.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
animation-name: the-best-animation;
animation-duration: 2s;
}
</style>
To define the appearance of this animation, let’s establish the @keyframes at-rule. It should sit outside of any selector. Don’t forget to specify a name for the animation.
There are two ways to define animation sequences: the first is to use a from…to declaration, and the second is to use percentages of the animation's duration.
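For reference, the from…to form would look like this; from is equivalent to a 0% step and to is equivalent to a 100% step:

```css
@keyframes the-best-animation {
from {
transform: translateX(0);
}
to {
transform: translateX(100%);
background-color: aquamarine;
}
}
```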
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
animation-name: the-best-animation;
animation-duration: 2s;
}
@keyframes the-best-animation {
100% {
transform: translateX(100%);
background-color: aquamarine;
}
}
</style>
At the end of the animation, the circle resets to its initial state. With animation-fill-mode it is possible to change this behavior: animation-fill-mode specifies the styles applied to an element before an animation starts and after it ends. The available values are none, forwards, backwards, both, initial, and inherit.
Now I want the golden ball to stay in its final position after the animation is finished. Let’s apply the value forwards to animation-fill-mode.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
animation-name: the-best-animation;
animation-duration: 2s;
animation-fill-mode: forwards;
}
@keyframes the-best-animation {
100% {
transform: translateX(100%);
background-color: aquamarine;
}
}
</style>
To extend this animation and make the golden ball’s behavior more complex, I want to add more steps to the @keyframes at-rule. Now the ball changes position, color, and size.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
animation-name: the-best-animation;
animation-duration: 2s;
animation-fill-mode: forwards;
}
@keyframes the-best-animation {
0% {
transform: translateX(0);
background-color: aquamarine;
}
35% {
transform: translateX(100%) scale(0.5);
background-color: pink;
}
65% {
transform: translateX(100%) translateY(100%) scale(0.5);
background-color: violet;
}
100% {
transform: translateY(100%);
background-color: aquamarine;
}
}
</style>
You can also specify how many times an animation will repeat by adding a value to animation-iteration-count. The value can be a number or a global keyword; for an endless animation, use the infinite keyword as the value.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
animation-name: the-best-animation;
animation-duration: 2s;
animation-fill-mode: forwards;
animation-iteration-count: 2;
}
@keyframes the-best-animation {
0% {
transform: translateX(0);
background-color: aquamarine;
}
35% {
transform: translateX(100%) scale(0.5);
background-color: pink;
}
65% {
transform: translateX(100%) translateY(100%) scale(0.5);
background-color: violet;
}
100% {
transform: translateY(100%);
background-color: aquamarine;
}
}
</style>
With animation-direction it is possible to specify how an animation should be played: normal (the default), reverse, alternate, or alternate-reverse. A delay is also available for animations via animation-delay, which works just like transition-delay.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
}
.parent:hover .child {
animation-name: the-best-animation;
animation-duration: 2s;
animation-fill-mode: forwards;
animation-iteration-count: 2;
animation-direction: alternate-reverse;
}
@keyframes the-best-animation {
0% {
transform: translateX(0);
background-color: aquamarine;
}
35% {
transform: translateX(100%) scale(0.5);
background-color: pink;
}
65% {
transform: translateX(100%) translateY(100%) scale(0.5);
background-color: violet;
}
100% {
transform: translateY(100%);
background-color: aquamarine;
}
}
</style>
Now I want to make the animation work endlessly. I will shorten the properties by writing them in one line. The order of the values is not important, except for duration and delay: the browser reads the first time value as animation-duration and the second as animation-delay. Then I will move the animation properties onto the child element itself. These changes make the animation play forever, without any trigger.
<style>
.parent {
background-color: gray;
height: 400px;
width: 400px;
}
.child {
background-color: gold;
height: 50%;
width: 50%;
border-radius: 50%;
animation: the-best-animation 2s forwards infinite alternate-reverse;
}
@keyframes the-best-animation {
0% {
transform: translateX(0);
background-color: aquamarine;
}
35% {
transform: translateX(100%) scale(0.5);
background-color: pink;
}
65% {
transform: translateX(100%) translateY(100%) scale(0.5);
background-color: violet;
}
100% {
transform: translateY(100%);
background-color: aquamarine;
}
}
</style>
This article provides all the information you need to start creating CSS animations. You can see how easy it is to make magic by manipulating space, color, shape, and time. You can animate text, buttons, dropdown menus, logos, and other elements on a page. One warning: have a clear reason to add animation to your design; one common reason is to provide visual feedback on user actions. Remember that the best way to master something is constant practice. Experiment, look for best practices, and become a master of animations.
Too often, I hear the woes of a fallen company: the cries of a thousand founders burnt by the all-consuming fire that is messy code. These digital disasters could have been avoided, had founders been taught a few simple things to look for during the process of choosing their software development team.
Software is meant to be soft, which is another way of saying easily changeable. Its malleability lends itself to adaptation, iteration, and change. This soft nature of code is uniquely powerful and essential to innovation and therefore must be preserved at all costs. But messy code is ruthless in its hunt for fresh code to devour. Problems arise when we drop our guard, forgetting its appetite is never satisfied, allowing our precious code to be destroyed.
Messy code is like a messy room; it starts by saying we’ll clean it up tomorrow. However, tomorrow comes and it’s even messier than it was before. We push it off until the weekend, but when the weekend comes and… sniff sniff... we start to smell something funny. The mess seems to be growing organisms of unknown origin which now feed on the decaying matter once known as our room. In code, this mess can be identified by the following characteristics:
A single module of code does too many different things. This makes it difficult to reuse in other parts of the code without accepting the baggage, or spending a large amount of time parsing out the desired function from the module.
Making a change to one module requires subsequent changes in many other modules due to unknown dependencies buried in the code.
Making a change to one module breaks other modules with no conceptual relationship to the code that is being updated.
When read, the code is not easily understandable and not clear in its function.
(Uncle Bob discusses these code smells and more in his PPP book.)
All these and more contribute to rotting code, rendering it uncontrollable, unpredictable, inestimable and expensive…. Dun dun duuuuunnnnnn.
Just like with the messy room, we made the mess because we were in a rush. We didn’t have time to put the dirty laundry in the hamper, or take the trash out. Obviously, making a mess allows us to go faster, right?
Wrong! As the mess piles up, our ability to navigate the space slows down until we struggle to even open the door past the clutter that has collected beyond.
This is what’s called “The productivity trap”. We think we can go fast and be efficient by making a mess, but in truth, we can’t, not in the short-term and certainly not in the long-term.
Messy code slows down productivity over time, making it more expensive to make changes or additions to the code in the future. You might say “We'll just make them work harder!” However, your development team is already working hard, very hard, they are just not working clean. In this environment the harder they work, the slower they go. You might say “We can add more developers! That will help us go faster!” Wrong again! When adding new team members, there’s always a dip in productivity while we help them catch up. However, once they finally catch up and learn how to make a mess like the rest of the team, it only gets worse as they start to contribute their own mess to the code. As you might imagine, the result is that productivity drops even more.
There will always be “Bugs” in the code. Bugs are unexpected behavior that may break the system. Some bugs are easy to squash, while others hide away in the deepest darkest parts of this messy room we call our code base. If our room is clean, then it’s relatively easy to find that bug crawling around on the open floor. It sticks out like a sore thumb! But pile mounds of clothes, food, and trash on top and that bug will never be found; in fact, it’s probably breeding.
So how can we know our development team isn’t making a mess? After all, we business owners don’t read code; all we can see is the tip of the iceberg as it’s rendered to us by our development team. So before you select your devs, make sure to ask the following questions:
Are you a freelancer, or part of a dev team?
There are two main reasons to hire a development team instead of a freelance developer.
Teams are familiar with writing code for others to read. This exchange of code is important because well-organized code is necessary if the team is to communicate well with one another.
An individual developer may become ill, or move, or decide they don’t want to work on your project anymore. This means you’ll have to bring in someone new who will probably take one look at your code and want to run for the hills. Teams, or development companies, are more resilient, more permanent, and more dependable.
Do you practice Agile?
Two ships set sail for the mainland. The captain of the first ship holds true to his course no matter what: “A straight line is the fastest route!” The second ship isn’t so convinced. This captain is constantly reading the ocean to determine how she will adjust her course. Yes, you may say that her course is less straight, and perhaps it’s even a longer route than the first ship will take. However, you’ll find that the first captain ends up shipwrecked on an island or devoured by a storm for his lack of adjustment. The second captain will have made it to her destination, as she was able to go around the storms and islands in order to safely arrive at the mainland.
The moral of this story is to “commit to the mission, not the course.” We must be constantly evaluating our course, making many decisions along the way to be sure it is still the best route. Agile gives us constant feedback, allowing us to make course adjustments along the way to completing our project. This iterative process leads to innovation and a greater success rate.
Do you practice TDD (Test Driven Development)?
A test is like a proof. If code doesn’t have tests, then there is no proof it works. TDD takes this to the next level, requiring those tests before the code is even written. If your developers are not practicing TDD, then they are not proving the code functions the way it was designed to.
Do you have experience building large apps from the ground up?
DNS, Server admin, SSL, HTTP, Cloud Infrastructure, Database, Deployment, Security, Scaling, Lions, Tiger and Bears, OH MY!
These are just a few of the scary beasts that need to be tamed in order to build a new app. Simply knowing how to code doesn’t cut it. Inexperienced developers haven’t dealt with these animals yet, and even years of experience doesn’t promise know-how, because that experience could have been in a specialized role (like dog grooming), not bear wrestling.
You’re looking for true explorers who’ve braved the wild, and bent it to their will. Make sure to ask your devs if they’ve handled these beasts before… especially the bears.
From Baggage to Beasts, these are the things to look for when choosing the right dev team in order to keep your code clean and smelling fresh. You can learn more in our video series on Clean Code. If you have a particularly smelly situation, or would like to get started on the right foot; then reach out to our Clean Coders Studio and we’ll help you tackle the most daring software feats.
Stay clean my friends,
Paradigm Shift
a fundamental change in approach or underlying assumptions.
Having only ever coded under an Object-Oriented paradigm, there were certain assumptions that I tried to bring along as I journeyed into the Functional world. Classes, objects, inheritance–things I thought I had to have actually turned out to be superfluous to the main idea of the code I was writing.
In many ways, learning functional programming felt like learning to code all over again. What follows are some differences and similarities I've found between the Functional and Object-Oriented paradigms.
Typically, in an OO language, you would have a class with accessor and mutator methods. The accessors would just perform read-only operations on the object, but the mutators would change the state of the object.
In this first example, we'll explore a basic Person implementation in an OO language: C#.
public class Person
{
private List<Item> Inventory { get; set; }
public Person() => Inventory = new List<Item>();
public void Equip(Item item) => Inventory.Add(item);
public double Wealth() => Inventory.Select(i => i.Value).Sum();
}
In this example, Equip (via List.Add) changes the state of the Person without regard for anything that depends on it outside the scope of the method. Within the OO paradigm, this sort of coupling between data and functions is desired. An object should be managed by its member methods, and the rest of the application should know nothing of its implementation. But what does this look like in the Functional world?
Still stuck in my Object-Oriented mindset, I tried to implement this code in Clojure, but had no idea where to start.
State! Where do I store my member properties? How do I initialize a person?
My initial thought was to use global variables to store my state, which proved to be difficult to manage in my tests alone, and would break the moment a second person is created. I couldn't see the obvious solution to this problem until I saw someone else do it, and even then I still tried hanging onto my OO practices.
(ns person)
(defn wealth [items]
(reduce + (map :value items)))
That's it–just a single function that sums all the items.
But what about my person? I need an object to store all my data, right?
My pitfall was thinking that I needed objects, classes, and encapsulation, when really all that was necessary was a vector and a wealth function to operate on that vector. The rest could be implemented using the core collection functions.
(-> []
(conj sword)
(conj six-fingered-glove)
(conj florinese-dagger)
wealth)
Another thing you may have noticed here is how nothing outside the scope of the wealth function is affected. wealth receives a value, items, and returns a value based on that input. It is completely unaware of the world around it.
Another key difference I've seen between the OOP and FP paradigms is how data is represented. In OOP, we generally try to keep our data within the scope of an object and operate only on that object.
public void Main()
{
var westley = new Person();
westley.Equip(mask);
westley.Equip(sword);
westley.Equip(iocanePowder);
Console.WriteLine($"Wealth: {westley.Wealth()}");
}
In FP, our data is somewhat exposed, freely flowing throughout the application.
However, that doesn't mean we don't have object-like data structures. If we needed to represent some person "objects", we would use some key-value data type like a hash-map!
(def people
[{:kind :person
:name "Fezzik"
:inventory [{:kind :item :name "Holocaust Cloak" :value 24}]
:strength 389
:charisma 4
:dexterity 2}
{:kind :person
:name "Inigo Montoya"
:inventory [{:kind :item :name "Six-Fingered Sword" :value 500}
{:kind :item :name "Brandy" :value 8}]
:strength 11
:charisma 21
:dexterity 73}
{:kind :person
:name "Miracle Max"
:inventory [{:kind :item :name "Miracle Pill" :value 65}
{:kind :item :name "Bellows" :value 32}]
:charisma 86
:strength 2
:dexterity 17}])
And if we absolutely need to have member functions, we could accomplish that by adding fields for them...
=> (def vizzini
{:kind :person
:name "Vizzini"
:intelligence 5000
:reassure #(println "Absolutely, totally, and in all other ways, inconceivable.")
:greet #(println "We are but poor circus performers.")})
=> ((:reassure vizzini))
"Absolutely, totally, and in all other ways, inconceivable."
=> ((:greet vizzini))
"We are but poor circus performers."
This is a bit ugly though... what we really want is to be able to write something like (reassure vizzini) or (greet vizzini). Lucky for us, Clojure provides several ways of doing this!
Interfacing is another big concept in OO languages. These are used as a façade to perform the same kind of operation on objects of different types.
public interface ISpeakable
{
void Rename(string name);
void Reassure();
void Greet();
}
In FP, namely Clojure, we have a couple ways of achieving interface-like behavior: Protocols & Multimethods.
Protocols are sort of like Clojure's version of interface. Just like any other interface, a protocol has a name and a set of function signatures.
(defprotocol ISpeakable
(rename [_ _] "Renames the speaker")
(reassure [_] "Makes a reassuring statement")
(greet [_] "Announces a polite greeting"))
Once the protocol is defined, it can be applied to any defrecord implementation and still look like an ISpeakable. The use of defrecord in Clojure is very similar to that of class in OO languages. While this data type leans away from the typical Functional pattern, it can sometimes be the best tool for the job.
(defrecord Nemesis [name father]
ISpeakable
(rename [this name] (assoc this :name name))
(reassure [_] (println (str name " reassures: I swear on the soul of my father, " father ", you will reach the top alive!")))
(greet [_] (println (str "Hello. My name is " name ". You killed my father. Prepare to die."))))
This code allows us to create an object in memory with :name and :father properties, as well as three member methods. Record fields are immutable, so rename cannot modify the record in place; instead it returns an updated copy, which we rebind before calling greet.
(defn -main [& args]
(let [inigo (->Nemesis "Inigo" "Domingo Montoya")]
(reassure inigo)
(let [inigo (rename inigo "Inigo Montoya")]
(greet inigo))))
=> "Inigo reassures: I swear on the soul of my father, Domingo Montoya, you will reach the top alive!"
=> "Hello. My name is Inigo Montoya. You killed my father. Prepare to die."
While interfaces dispatch a group of functions based on an object's type, multimethods dispatch single functions based on a predefined rule unique to each multimethod. With this, we can achieve interface-like behavior in a functional language.
(defmulti attack (fn [person _] (:combat-style person)))
(defmulti speak (fn [person _] (:condition person)))
(defmethod attack :brawler [person target]
; Fight as God intended: skill against skill
)
(defmethod attack :swordsman [person target]
; Counter Thibault with Agrippa
)
(defmethod speak :deaf [person message]
; Replace 'r' with 'w', 'th' with 'ff', ...
)
(defmethod speak :inflamed [person message]
; CAPITALIZE EVERYTHING
)
=> (speak archdean "Marriage is a dream within a dream.")
"Mawidge is a dweam wiffin a dweam."
=> (speak fezzik "I am the Dread Pirate Roberts and there will be no survivors.")
"I AM THE DREAD PIRATE ROBERTS AND THERE WILL BE NO SURVIVORS."
Similarly to interfaces in OO, we can group together several multimethods to encapsulate the finer details of certain behaviors, keeping our higher-level code oblivious to the implementation. We can see here that the invocations of speak only know about the function interface and do not need to know how the object being passed in affects the output.
There are a lot of areas where Functional and Object-Oriented programming contrast, but they are quite similar in other ways. The difference really lies in the way you think about and structure the software.
While you can absolutely do Object-Oriented programming in a Functional language, and vice-versa, it really helps to use a language designed for that paradigm. But if you like to go against the grain, then I encourage you to try using one paradigm in a language designed for another!
UI Testing, a black hole of app software development. (Image by Event Horizon Telescope)
Let's get it out there. The biggest pain of any app development arises from the quest to automate the time-consuming manual effort of testing the features from the perspective of an actual customer.
When this initiative begins, we quickly see the adoption of UI Testing, offered as the best, or perhaps the only, solution. UI Testing frameworks for mobile (Appium, XCUITest, Espresso, and more) are so prevalent that we fail to question their use.
These tools, whilst technological marvels, are a trap, and their use flies in the face of basic software testing principles. The actual tool doesn't even really matter; the outcome is the same: very little in the way of useful improvement.
So stay tuned if you want to:
The problems with UI tests I've observed (and seen repeated on different projects over 15 years) are:
So how do smart teams, especially mobile app teams, fall into the trap? The acuteness of this problem arises from the nature of releasing software via app stores.
Everyone's assumption is that nowadays all software is released in a continuous delivery fashion, where any fix is just a code-push away. Yet in reality, mobile app development looks more like the traditional software release process of yesteryear (when a golden master was literally sent to a factory to be replicated onto physical media). The app stores just replaced the factory, still pressing your app out one physical device at a time.
When software releases look like THIS you don't want to be on the team that ships bugs
Let's just say the delivery process is VERY different from that of continuous delivery of a backend service or web app. So the unique characteristics of app store distribution create a higher imperative for quality up front. This leads to increased cost and delays caused by the limitations of manual testing.
At some point or another, the team's fate is set; they become trapped on a path to UI automation into which plenty of energy goes, yet from which nothing all that useful comes out.
We know all this! In software engineering we're all familiar with Mike Cohn and Lisa Crispin's Testing Pyramid and its cautions about the use of UI Testing.
This is where a team might be expected to return to this insight, pause for breath, heed the warnings, and say "we know where this leads, whatever WE DO.. however tempted, WE MUST resist the temptation of too many UI tests!".
Not so! The pull of sunk costs and the pre-existence of a set way of doing things is a powerful force. The existence of painstakingly written, human-centric test scripts ensures that automation of those scripts is seen as the logical next step.
(I'd bet money that the primary use of BDD tools is to automate existing manual test scripts rather than to write requirements.)
What's not given nearly enough consideration is that those tests were intended for a human being to follow: one vastly more intelligent than the computer about to be handed this complex task.
Creatures of automation's attempts to operate in a world that was designed for humans don't typically end well
What happens next is an emphatic embrace of any number of well marketed and popular User Interface testing tools which promise to make the manual testing a distant memory.
And now our problems really begin!
Soon we end up with an "Ice Cream Cone" approach to testing, despite the fact that it turns the testing pyramid on its head. This leaves us with a distinctly un-agile software delivery reliant on slow, error-prone, expensive tests; the very thing we were meant to replace!
Forcing computers into a human-centric world never ends well. The reason UI tests don't work, and the whole approach is flawed, is that they are actually end-to-end or "system" tests, involving literally every part of the system.
(Yes, you can run all this in a simulated environment to make it more predictable and faster. The effort to do so, however, is non-trivial, meaning it probably doesn't happen. If it does, the complexity of the test system increases exponentially, with intolerable increases in code, cost, and maintenance.)
Here's a list of the reasons I've observed for these tests failing (one I revisit whenever I'm tempted to repeat the experience):
Notice how many of these have nothing to do with the actual behaviour of the software itself but the endless combinations of different things outside that cause things to go wrong.
Manual testing software isn't a sustainable OR acceptable approach to software testing. But neither is UI Testing. So what is?
One answer is the relatively unknown discipline of "Acceptance Testing". This is something I discovered when grappling with these problems and learning about the value of TDD in solving related software quality problems.
(Given the lack of existing material, I created a new series exploring Acceptance Testing in detail with Clean Coders.)
The Key Takeaway: Acceptance Tests are the one kind of test that is designed for the explicit purpose of automating the testing of customer / business requirements. The critical difference between Acceptance Tests and UI Tests, however, is that Acceptance Tests act at a level close to the actual code itself. They test the software at a level where the tests have direct access to the business logic of the software, without the complexities of the UI and everything else it entails.
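To make the distinction concrete, here's a minimal sketch of the idea in shell. The `add` function is a hypothetical stand-in for an application's business logic layer (not anything from the series); the point is that the test expresses the business requirement and exercises the logic directly, with no browser, UI driver, or rendering in the loop.

```shell
#!/bin/sh
# Hypothetical stand-in for the application's business logic layer.
# A real project would invoke its service layer here instead.
add() {
  echo $(( $1 + $2 ))
}

# Acceptance test: states the business requirement directly.
# No UI, no browser automation, no screen-scraping involved.
result=$(add 3 4)
if [ "$result" = "7" ]; then
  echo "PASS: adding 3 and 4 yields 7"
else
  echo "FAIL: expected 7, got $result"
  exit 1
fi
```

Because the test talks to the logic directly, it runs in milliseconds and fails only when the business rule is actually broken, not when a button moves.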
In short, Acceptance Tests are a way of returning to the guidance of Mike Cohn's Pyramid with particular focus on the middle of the Pyramid he labelled "Service Tests". These are the tests which Mike himself observed as being the forgotten layer of the test automation pyramid.
Here, I've suggested Acceptance Testing can help you escape from the UI test trap, but it turns out it also has many other benefits to offer around the actual development of software in the first place, not just how you test it!
If you'd like to learn more about Acceptance Testing and how it can benefit your mobile (or other) app development project, check out my in-depth 5 Part Clean Coders series.
In it you'll discover the simple brilliance of Acceptance Testing and how to build successful software sooner and keep it that way for longer.
When entering the field of software development, we fully understand that we must learn to write code. What isn't generally understood from the outset is that there are a myriad of other, less well-known topics and techniques that must also be mastered. One such very important aspect of software development is a Version Control System ('VCS').
As you may already know, a VCS allows developers to track changes in a group of files. While it might be fun and educational to build our own simple VCS from scratch, we really don't need to. A smart guy named Linus has already created the de-facto tool of choice for software professionals today: git.
The purpose of this article is to help new (<= 5 years professional experience) software developers author better git commits.
I will assume that you have at least tinkered with git and are familiar with, and have used, the following commands:
git init, git status, git diff, git add, git commit, git push, git pull.
If not, take a look at the 'Pro Git' book. In particular, there are some nice starting chapters that should quickly bring you up to speed.
Ok, let's dive in.
Commits are the life-blood of a git repository. Each is like an entry in an ever-growing journal detailing a project's development history. So, let's talk about creating and working with commits.
We'll start with a simple principle:
Know exactly what you are committing (before doing so).
Before you git commit, always git status and git diff first! Or, if you use a git GUI, actually look at the diff before clicking the button!
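That pre-commit ritual is just two commands. Here's the whole flow in a throwaway repo (created in a temp directory, so it's safe to experiment with):

```shell
#!/bin/sh
set -e
# Set up a throwaway repo so nothing real is at risk.
dir=$(mktemp -d)
cd "$dir"
git init -q
echo "hello" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "Add app.txt"

# Make a change, then review it BEFORE committing:
echo "TODO: remove this debug line" >> app.txt
git status --short   # lists ' M app.txt': a modified, unstaged file
git diff             # shows the exact lines that would be committed
```

Two seconds of reading that diff is all it takes to spot the stray debug line (or worse, a credential) before it enters history.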
Worst-case scenario: Many a developer has mistakenly committed and pushed private credentials to an open-source remote. (Been there, done that...)
More likely scenario: You added println statements or other temporary debugging aids while coding which you meant to remove before committing. Avoid the unnecessary hassle and embarrassment of a git revert by always looking at the diff before committing.
The video below offers a good example of using a GUI to review changes before committing.
Now, git doesn't tell you how or when to commit, or how big commits should be, so I will: Commits should happen more frequently and should be smaller in size than you probably think. The more frequently they come, and the smaller they are, the better! We're talking about potentially dozens of commits per day!
If you're doing TDD you should be committing every time a new test or new suite of tests is passing. If you're doing TCR the commits will appear automatically and you'll actually have to squash them into larger, but still bite-sized, chunks. But what if you've been hacking for several hours and have a massive diff on your hands? (Happens to the best of us...)
Slice up large batches of un-staged/un-tracked changes into bite-sized commits.
You can use git add --patch, or a GUI like Sublime Merge.
Why all this fuss about small commits? Well, the tiny changes contained in small, single-purpose commits can, if desired, be manipulated independently by amending, reverting, or re-ordering, or can even be squashed into larger commits--but once squashed, the previously listed actions become unavailable.
Dave Thomas and Andy Hunt, in their classic book The Pragmatic Programmer, called version control "a giant undo button--a project-wide time machine that can return you to those halcyon days...when the code actually compiled and ran."
Still not convinced?
Martin Fowler echoed this sentiment in his book on refactoring in which he confesses: "I commit after each successful refactoring, so I can easily get back to a working state should I mess up later."
A commit after each successful refactoring--that's a lot of commits! If I've lost you at this point, let me reassure you that you don't have to push a stream of such tiny commits (more on that later). I'll also say that the so-called 'problem' of too many commits is a lot like the 'problem' of 'too many layers of indirection' in that it is usually not a problem.
Here's a demonstration of how I use Sublime Merge (my preferred git GUI) to go about staging a group of changes each in their own commit (that video is a bit long but the first minute is probably enough to get the gist):
Ok, now let's talk about composing good commit messages. Lots has been written about doing this well. My simplified guidance is this:
If, like Jessica, the manager in "O Foreman, Where art Thou?", you and your team members review each and every commit pushed by every other team member (and you really should!), you can begin to leverage git commit messages as a communications tool. You can provide deeper context for decisions and refer to articles or blogs that helped you in your coding. Commit messages can serve as conversation starters that advance the state of the art for your organization.
Before moving on, let's pause for a moment to recognize that the act of composing a commit message is similar in nature to naming a variable or describing a test case. Giving names to things doesn't require deep knowledge of mathematics or computer science but it can be deceptively difficult! Sometimes I have to take my hands off the keyboard, sit back, and just think for a moment or two before a good commit message appears in my mind. Sometimes I write something only to rewrite it seconds later. It's worth a few moments of your time to author a decent message.
Use commit messages to communicate!
Ok, so you've logged a few commits; all that's left is to push, right? Well, pushing is significant in that once commits are pushed, they become part of the history for anyone who might git pull from that moment on. If you alter the commit history in any way once it's been pushed, everyone else on the team will be forced to repair (or even re-clone) their local repositories!
So, all of that amending, re-ordering, and squashing we talked about earlier is off limits once the commits have been pushed. My recommendation is to push regularly and often (multiple times a day!), but not so often that you don't have a chance to consider a group of commits for squashing and re-ordering, etc...
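Squashing before a push is normally done with git rebase -i, which opens an editor; for a scriptable sketch of the same effect, git reset --soft collapses the last few unpushed commits into one (the file names and messages here are illustrative):

```shell
#!/bin/sh
set -e
# Throwaway repo for the demonstration.
dir=$(mktemp -d)
cd "$dir"
git init -q
commit() { git -c user.name=demo -c user.email=demo@example.com commit -qm "$1"; }

echo "start" > work.txt
git add work.txt
commit "Initial commit"

# Three tiny TDD-style commits, not yet pushed:
for step in one two three; do
  echo "$step" >> work.txt
  git add work.txt
  commit "WIP: step $step"
done

# Collapse them into a single, bite-sized commit before pushing.
# (git rebase -i HEAD~3 achieves the same thing interactively.)
git reset --soft HEAD~3
commit "Implement the feature"

git log --oneline   # initial commit plus one squashed commit
```

Because reset --soft moves HEAD but leaves the index intact, the combined changes are already staged and ready for a single, well-worded commit.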
As we have learned from all great time-travel movies: don't alter history!
(If you absolutely have to, you can alter pushed history using git push --force... Let the reader beware!)
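If you ever do find yourself rewriting pushed history, git push --force-with-lease is a safer variant of --force: it refuses to overwrite the remote if someone else has pushed in the meantime. Here's a sketch using a local bare repository as a stand-in "remote", so nothing real is at risk:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q --bare remote.git   # stand-in for the team's shared remote
git clone -q remote.git work 2>/dev/null
cd work
git config user.name demo
git config user.email demo@example.com

echo "v1" > app.txt
git add app.txt
git commit -qm "Add app.txt"
git push -q origin HEAD

# Oops: fix the last (already pushed) commit, rewriting history...
echo "v2" > app.txt
git add app.txt
git commit -q --amend -m "Add app.txt (fixed)"

# ...then force-push. --force-with-lease succeeds only because
# nobody else pushed to the remote branch in the meantime.
git push -q --force-with-lease origin HEAD
```

A plain --force would have clobbered the remote unconditionally; --force-with-lease at least checks that the remote still looks the way you last saw it. Either way, the teammates who already pulled the old commit still have to clean up.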
Authoring commits in git (or any other VCS) is an important part of your work as a software developer. What will you do to make your commits communicate more effectively? I'll bet you've had a few ideas come to mind as you've read this article. Implement them without delay! Your team will thank you. Developers who inherit your code in the future will thank you. Your future self will even thank you!
Paul shares his own experiences with the adoption of Acceptance Testing and some things you can do to improve your chances of success, including the importance of a SOLID software design and architecture when adopting Acceptance Tests.
We'll check in on a close cousin of Acceptance Testing: BDD, and see where the two approaches overlap, what problems they solve, and what the actual differences are between them to help you make better decisions about which is right for you.
We'll finish up by considering the kinds of projects where Acceptance Testing is valuable and reflect on some famous examples of software that went wrong. Not because of bugs, but because of missing requirements!
You'll get a few last tips as to how to assess your own projects to see if they're likely to benefit from adopting Acceptance Testing. If you decide that Acceptance Tests might be right for your team, Paul proposes his own 3 laws of Acceptance Testing to guide your implementation.
And finally, catch up with the team as they put the finishing touches to their HP-35 Calculator and see it all come together (including those pesky UI Tests!).
We look at the tricky question of when exactly these 'detailed' requirements should get written on a project, how detailed they should be, where they belong in the process, and WHO exactly should be writing them.
We'll consider the impact this 'additional' work might have on our team's velocity, and how it actually helps planning and the accuracy of our estimates, bringing greater predictability through the 'shifting left' of the QA process.
Enjoy the chance to be a fly-on-the-wall, watching along as our intrepid team wrestles with the process of writing new up-front requirements that are specific enough, through examples, for the development team to carry on the work of developing their HP-35 calculator.