Category: Software Development

Demystifying CI/CD, and a simple economic concept that enables Continuous Delivery

We often hear the term CI/CD tossed around in conversations, particularly in the software world. It has become such a buzzword, like AI and Machine Learning, that our minds become saturated. As a result, we unconsciously (or consciously) don’t even bother unpacking the term.

Continuous Integration

Continuous Integration helps reduce risks

By reducing the size of merge conflicts

The job of a software developer, at a very literal level that doesn’t account for all of its sophistication, is to modify the source code of an application so that it produces new behaviors. The source code is a collection of text files written in one or more programming languages. The source code goes through a magical process called building that eventually turns the text files into an application.

An application has many features. Usually, a developer is responsible for one feature at a time. To implement the new feature, the developer modifies the source code on his or her local machine. However, a team usually has more than one developer, so the natural question is: how do the features developed locally get incorporated together?

Like most things nowadays, the answer is in the cloud. Many cloud solutions enable collaboration between developers by providing a central place to store the source code; an example is GitHub, built on Git. When a developer is finished with a feature, he or she pushes the source code to GitHub. The feature then becomes immediately available to other developers who need it to develop their own features.

However, there is a problem. If the source code is just a bunch of text files, and developing a feature amounts to modifying the source code, then two developers working on two separate features may end up modifying the same line of the same text file. This is called a merge conflict, and it happens a lot. Merge conflicts are a natural problem arising from collaboration between developers.

When a merge conflict happens, a developer has to resolve it manually by making a decision: keep the changes from both features, discard one or the other, or discard both. Either way, this has a direct impact on the application, because the source code on GitHub is the source code that is automatically used to build the application. A wrong decision may result in bugs or undesirable effects. Therefore, merge conflicts are risky.

Herein lies the first benefit of continuous integration: Continuous Integration helps reduce risks by reducing the size of merge conflicts. The opposite is not integrating the code into a shared place frequently; in other words, integrating the code of a feature only after the whole feature is done. This leads to big merge conflicts, which are riskier than small merge conflicts.

By preventing programming errors from going downstream

The other part of continuous integration is verifying the source code automatically.

In Continuous Integration, after reducing the size of merge-conflicts to reduce risks, we are left with literally everything else that could go wrong with software. There are many types of bugs that may happen, with different degrees of difficulty to discover. Correspondingly, there are many types of tests with different associated costs, such as unit tests, integration tests, etc.

Therefore the problem is really a sequencing problem: what is the optimal order in which to discover bugs? Continuous Integration solves this by sequencing first that which adds value most cheaply. Unit tests are the cheapest type of test that adds the most value: an application cannot function properly if its individual programs (or units) do not function properly.

In continuous integration, after developers merge code into a shared repository, unit tests are executed to verify that the programs work as expected. Let’s say you want to build a stabbing robot because you’ve just seen John Wick. The robot is controlled by a stabbing application.

I mentioned that unit tests are used to verify programs. Here’s where the difference between programs and applications is relevant. A program is a set of instructions that can be executed on a computer, while an application is a set of useful programs that help people perform functions, tasks or activities.

To build a stabbing application, a developer will develop a set of programs that are orchestrated in a way that produces a stabbing behavior. For example, there may be a program that picks up a knife, a program that lifts the right arm, and a program that lowers the right arm.

In order for the stabbing application to work as expected, the individual programs must work as expected. Testing individual programs is much, much faster than testing the whole application. Imagine a production line where the first phase verifies that the robot has taken the knife, the second phase verifies that the robot’s right arm is at a certain height, and the third phase verifies that the robot’s right arm is at a lower height. The analogy emphasizes that testing individual programs can be done automatically and fast. In the software world, that part of the production line is a set of unit tests.

When a developer pushes code to the shared repository, a set of unit tests is executed. These unit tests cover not only the new program but all programs developed previously. This prevents one program from invalidating another. Imagine the robot also has a drinking program that keeps it alive by drinking from a bottle of oil held in its left hand. If, during development of the stabbing feature, the right arm grabs the knife but the developer accidentally made the left arm let go of the oil bottle, then the stabbing feature is invalidating the drinking feature.
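The robot example can be sketched as a handful of tiny programs with unit tests. The function names and state representation below are invented for illustration; the point is that each test verifies one program in isolation, and the suite as a whole guards against one feature invalidating another:

```python
# Hypothetical robot "programs": each is a small, independently testable unit.

def pick_up_knife(hand):
    """The given hand grips the knife."""
    hand["holding"] = "knife"
    return hand

def lift_arm(arm, height=1.0):
    """Raise the arm to the given height."""
    arm["height"] = height
    return arm

def lower_arm(arm):
    """Drop the arm back down."""
    arm["height"] = 0.0
    return arm

# Unit tests: run on every push to the shared repository.
def test_pick_up_knife():
    right = pick_up_knife({"holding": None})
    assert right["holding"] == "knife"

def test_left_hand_keeps_bottle():
    # Guards against the stabbing feature invalidating the drinking feature:
    # the left hand must still hold the oil bottle after the right hand acts.
    left = {"holding": "oil bottle"}
    pick_up_knife({"holding": None})
    assert left["holding"] == "oil bottle"

def test_lift_then_lower():
    arm = lift_arm({"height": 0.0}, height=1.0)
    assert arm["height"] == 1.0
    assert lower_arm(arm)["height"] == 0.0
```

Each test runs in milliseconds, which is what makes executing the whole suite on every merge economically feasible.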

Therefore, Continuous Integration helps reduce risks by preventing programming errors from going downstream.

Continuous Testing

Traditionally, software is developed to enable business processes: inventory software enables a better inventory process, PowerPoint enables better presentations.

However, businesses have come to find software to be the primary differentiator.

An example is mobile check deposit applications. In 2011, top banks were racing to provide this must-have feature. By 2012, mobile check deposit had become the leading driver of bank selection (Zhen, 2012). Getting a secure, reliable application to market was suddenly business critical. With the low switching costs associated with online banking, financial institutions unable to innovate were threatened with customer defection.

Achieving a differentiable competitive advantage by being first to market with innovative software drives shareholder value. Therefore, businesses want to get software to market faster and faster.

However, software becomes complex very quickly. The risk of failure increases with every release. Without an understanding of these risks, decisions may be made that cause a loss of shareholder value:

Parasoft analyzed the most notable software failures in 2012 and 2013; each incident initiated an average 3.35% decline in stock price, which equates to an average loss of $2.15 billion in market capitalization. This is a tremendous loss of shareholder value.
DevOps: Are you pushing bugs to your clients faster?

To avoid this loss of shareholder value, businesses need to understand the risk associated with the software at any given point in time so that trade-offs can be taken into account when making decisions. But it takes a lot of experience and practice to understand software development and to appreciate its complexity. This makes it hard to communicate technical decisions, such as refactoring code, because such work protects developers against obstacles that are invisible to the business.

Therefore we need to bridge the gap between what the business expects from the software versus what developers produce. This is what drives continuous testing – the need for business to understand the risk associated with software at any given point in time.

Continuous Delivery

Continuous Delivery is the ability to get changes into the hands of users in a safe, fast and sustainable manner.

Small batch size is desirable

It speeds up learning. Smaller batches mean faster feedback, which means faster learning. A developer receives feedback about the quality of his work faster if his work is small and can be tested quickly.

It improves productivity. When a developer working on a task is interrupted by an important bug from a task he finished some time ago but that was only just tested, he has to switch context, which incurs attention residue.

It makes it easier to fix a problem. Fewer features in a release means that when a faulty behavior occurs we can more quickly and easily identify which feature it originates from. For a release with dozens of features, it is more difficult to track down the cause of a particular behavior.

It allows us to drop features and avoid the sunk cost fallacy, which is when individuals continue a behavior because of previously invested resources (time, money, effort).

The optimal batch size is where the aggregation of transaction cost and holding cost is minimal.

For example, you go to a store to buy wood for the winter. You have two choices:

  • Go to the store once and buy a lot of wood: low transaction cost (one-time fuel money), big batch size, and high holding cost (you’ll need a place to store the wood and to preserve it so that it doesn’t become unusable).
  • Go to the store multiple times and buy a little wood each time: high transaction cost (more trips, more fuel), small batch size, and low holding cost.

The optimal batch size is where the sum of transaction cost and holding cost is minimal. The U-curve has a flat bottom, so missing the exact optimum costs very little. This insensitivity is practically important because it’s hard to have accurate information.

In software development, transaction cost can be reduced by automated testing. Reducing transaction cost shifts the optimal batch size to the left, which means working with smaller batches becomes economically justified.

Finally, holding costs usually increase at a faster-than-linear rate in product development. For example, it’s exponentially harder to locate the root cause of a bug when a build contains more features (smaller changes mean less debugging complexity). Moreover, the market is unpredictable, and delaying a particular feature may cost us competitive advantages.
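The U-curve and the leftward shift can be made concrete with a toy model. The numbers below are illustrative, not real data; holding cost is modeled as linear in batch size for simplicity, even though in practice it often grows faster than linearly:

```python
# Toy model of the batch-size U-curve:
# cost per unit = fixed transaction cost amortized over the batch
#               + holding cost that grows with batch size.

def total_cost(batch_size, transaction_cost, holding_cost_per_item):
    return transaction_cost / batch_size + holding_cost_per_item * batch_size

def optimal_batch(transaction_cost, holding_cost_per_item, sizes=range(1, 101)):
    # Pick the batch size with the lowest total cost.
    return min(sizes, key=lambda s: total_cost(s, transaction_cost, holding_cost_per_item))

# Automating tests lowers the transaction cost, shifting the optimum left:
before = optimal_batch(transaction_cost=100, holding_cost_per_item=1)
after = optimal_batch(transaction_cost=25, holding_cost_per_item=1)
print(before, after)  # prints "10 5": the optimum moves to a smaller batch
```

Note also how flat the bottom is: in the first scenario, batch sizes 9 and 11 cost barely more than the optimum of 10, which is why missing the exact optimum is cheap.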

Transaction costs in Testing phase

Running a test cycle incurs fixed transaction costs such as:

  • Building features into a testable package.
  • Initializing and configuring test environments.
  • Populating test data.
  • Running regression tests.

Continuous Delivery is about making small batches economically viable

Small batch sizes are desirable, but the transaction costs of the testing phase make small batches economically unattractive. A key goal of continuous delivery is therefore to change the economics of the software delivery process, making it economically viable to work in small batches so we can obtain the many benefits of this approach.

Why I built Another Writing Application

Another Writing Application

Updated: Sometimes the backend is turned off automatically; I check in frequently to make sure it’s up. If you’re not able to put your writing references into the application, leave a comment and I’ll check the backend. If you’re concerned about data privacy, you can run the application locally; please visit its GitHub repository.

Why though?

I think the ability to find insights gives individuals a unique competitive advantage. As someone who wants to thrive in this world, I decided that I want to obtain insights, at least in software development (which is what I do for a living).

To find insights, you need to think effectively. To think effectively, you must make your thinking tangible, so that you can look and see what’s ineffective. As far as I know, writing is the only tangible outcome of thinking. Therefore I write a lot. However, writing is so difficult that not all of my high-quality writing gets published, and not all of my published writing is of high quality.

When I write, I tend to read a lot of sources, oscillating between them as needed to compare and contrast ideas. After having some interesting thoughts, I write them down. But such thoughts are often tentative, or they hint at possibly new ways of interpreting existing information. So I switch back to the sources to reconcile the new thoughts with them.

Sometimes the sources cover multiple subjects, but I am only interested in one, or just some keywords, so I need to switch between sources to look for the keywords and then read the surrounding text block. When you are pulling information from a lot of places, such switching increases cognitive load significantly, which reduces the processing power you can spend on actual thinking.

I thought about it, and I think what is lacking is a workspace where I can search for keywords from relevant sources and write my thoughts, without having to leave the tab. Another Writing Application is designed to be such a workspace.

The main feature of Another Writing Application is Search Focus mode, for retrieving sources containing specific terms. You can read the surrounding text blocks in Search Focus mode, or switch to Whole Text mode to read the entire source if you like. Additionally, you can write your thoughts and have them autosaved, all without ever leaving the workspace.

Another Writing Application isn’t a note-taking tool. For taking notes, I use Roam Research obsessively. However, Roam is a note-taking tool, not a writing workspace that serves the purpose of gathering sources and experimenting with thoughts. On the contrary, you have to be mindful of what you put into Roam, because it is designed to build a long-lasting repository, if you’re following the Zettelkasten method.

Another Writing Application is built as a place where you can dump your disorganized thoughts, organize them, and then move the organized thoughts into Roam or other places. In fact, I wrote this article using AWA, with 7 references. It is not intended to replace anything; it’s just an attempt at making writing, and consequently thinking, more convenient.

Gathering sources, reading, searching, and experimental writing, all in the same place: that is what Another Writing Application is for.

The application is publicly available here.


Add Source

When you add a URL to AWA, it calls the server to extract the content using Mercury Parser and inserts that content into your local storage. The backend doesn’t store anything; it just returns the extracted content. As you read your sources for the first time, drop the URL here and continue reading.


When you have an interesting narrative, write it down. If you hit a term that summarizes a broad topic you’re trying to articulate, search for that term.

By default, search-focus mode is used. Search-focus mode separates a given source into paragraph blocks and only displays the blocks that contain the searched term. You can expand other blocks to see the surrounding context.

If you want even broader context for a search result, switch to whole-text mode to see the entire text of the source.
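The core idea of search-focus mode is simple enough to sketch in a few lines. This is an illustration of the concept, not AWA's actual implementation (which runs in the browser):

```python
# Sketch of search-focus mode: split a source into paragraph blocks and
# keep only the blocks containing the searched term (case-insensitive).

def search_focus(source_text, term):
    blocks = [b.strip() for b in source_text.split("\n\n") if b.strip()]
    return [b for b in blocks if term.lower() in b.lower()]

doc = """Continuous Integration reduces risks.

Batch size matters in product development.

Small batches reduce integration risk."""

# Only the two paragraphs mentioning "batch" are kept:
print(search_focus(doc, "batch"))
```

Whole-text mode would simply return all the blocks, which is why switching between the two modes is cheap.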

Export Data

You can export the data in JSON format. The exported file contains additional metadata extracted using Mercury Parser. Your writing will always have the ID curren_note.

Changing location of sidebar

Some enjoy the sidebar on the right (like Roam).

But some would enjoy the sidebar on the left. You can change it either way. Please let me know which one you prefer.

Preview Markdown

Marked is used to produce an HTML string from your writing and display it in a modal.


See anything you don’t like? Please send feedback so that I can improve it. I use SmtpJs to send the email using my own email address, so your feedback is anonymous.

The application is publicly available here.

Technology stack:

I love Hyperapp, by the way. It’s a minimalist approach to building web applications. There are far fewer concepts to learn than with React and other front-end frameworks.

Timeline and tasks

I use Agenda to keep my to-do list and agenda. The entire process took me 6 days.

There are bug fixes and features that I didn’t explicitly add to the list, because I was in the flow.



Netlify (Initial choice and final choice)

I chose Netlify as a static hosting solution because its free tier seemed sufficient.

GitHub Pages (Dropped due to a weird styling issue)

Somehow, my website on GitHub Pages is not styled exactly as I see it in local development, while the version hosted on Netlify looks exactly the same.


Heroku (Initial choice)

My backend is just a NodeJS application with Express, Cors (for local use) and Mercury Parser as dependencies.

Initially, I deployed the backend to Heroku. The deployment was really simple, which was good. However, Heroku hibernates your app once in a while, and your app must sleep a certain amount of time within 3 days. In short, availability wasn’t guaranteed. Even though this is an open-source project and monetization isn’t the goal, I want it to be available. The unreliability of Heroku was a big demotivator for me, so I looked for an alternative.

I looked into Netlify cloud functions. However, there were limits on the number of requests and on running time. Then I thought that “free server hosting” was too broad a search phrase. My backend is a simple NodeJS-Express application. With that in mind, I searched for “free nodejs app hosting”, and after a bit of browsing, I stumbled across openode. It offers a free tier for open-source projects. A quick Google search did not reveal any limitations on availability, at least not so many that people would make such complaints visible in search results. I decided to go with openode.

Openode (Final choice)

One thing I enjoyed about openode is that the deployment process is driven by a command-line tool. There isn’t much up-front knowledge to learn for most NodeJS app developers. However, it wasn’t without friction.

The application is publicly available here.

Final words

Building this application has been a really interesting challenge for me. I had the opportunity to improve my problem-solving, prototyping and time-management skills, as well as to learn how to take an application from inception to delivery.

Let me know if you have any feedback!

Concurrency control


Imagine two users trying to access an employee table in the company’s database. One is requesting the total salary of the employees in order to transfer the money into their bank accounts. The other, upon receiving an email from the boss saying that he was impressed by John and has decided to give him a raise, goes on to modify John’s salary cell in the database. Suppose the total was computed before the update but the transfer happened after it. John ends up not getting the extra money that he earned this month, and the guy who was told to give John more money is probably going to be fired. No one is happy. This is also known as the incorrect summary problem.

Then maybe the boss rethinks his decision and finds himself to have been too hasty. The boss calls the other guy again and asks him to revert John’s salary to normal. Suppose the money guy saw the previous update and went on transferring money based on the table as he saw it. This leads to inconsistency in the realities of the people involved: the boss thinks he’s smart, John is happy and thinks the boss is dumb. A transaction writes a value that later gets aborted, but another transaction accessing the same element is not aware of this change and reads the aborted value. This is known as the dirty read problem.

If we allow two transactions to access a database element at the same time, God knows what will happen, and we can only hope that everyone gets what they want. But hoping is never economical, so we have to come up with a solution that specifies what each transaction can do, and within which scope.

The solution is Concurrency Control. Concurrency control provides rules, methods and methodologies to maintain the consistency of database components.

Concurrency control ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. The major goals of concurrency control are serializability and recoverability.

Database transactions & Schedule

A database transaction is a unit of work that encapsulates a number of operations, with defined boundaries in terms of which code is executed within that transaction. A transaction is designed to obey the ACID properties:

  • Atomicity: Either everything is executed, or nothing is executed.
  • Consistency: Every transaction must take the database from one consistent state to another consistent state.
  • Isolation: Transactions cannot interfere with each other, and the effect of a transaction is only visible to other transactions after it’s successfully executed.
  • Durability: All successful transactions must have effects that persist even after crashes.
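Atomicity is easy to demonstrate with Python's built-in sqlite3 module: when any statement inside a transaction fails, the whole transaction rolls back and the database stays in its previous consistent state. This is a minimal sketch using an in-memory database:

```python
# Demonstrating atomicity with sqlite3: a failed transaction is rolled back.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT PRIMARY KEY, salary INTEGER)")
conn.execute("INSERT INTO employee VALUES ('John', 1000)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE employee SET salary = 2000 WHERE name = 'John'")
        # Duplicate primary key -> IntegrityError -> whole transaction aborts.
        conn.execute("INSERT INTO employee VALUES ('John', 0)")
except sqlite3.IntegrityError:
    pass

# The UPDATE inside the failed transaction was undone along with the INSERT.
salary = conn.execute("SELECT salary FROM employee WHERE name = 'John'").fetchone()[0]
print(salary)  # 1000
```

Either everything in the `with conn:` block takes effect, or nothing does; that is exactly the "all or nothing" guarantee of atomicity.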

A database transaction, at any given moment, is in one of the following states:


After a rollback, a transaction can be restarted at an appropriate time if no internal logic error caused the abort.

Database transactions are arranged within a schedule.


Before we discuss the details, there’s an obvious question: why should a schedule be serializable when it could simply be serial?

If we execute transactions serially, then no problem would arise because the output of a transaction is the input of another. No inconsistencies, no abnormalities, no anomalies.

However, there are times when one transaction wants to read from the disk while another wants to use the CPU to compute some value. If we only allow them to run serially, then one transaction must remain idle even though it does not interfere with the other transaction in any sense, so no inconsistency could arise. This leads to the problem of low disk utilization and low transaction throughput.

That’s why we would prefer a schedule to be serializable rather than serial. Serializability ensures that the outcome is equivalent to the transactions executed serially, while how the transactions are interleaved internally can be arranged differently and more effectively.

There are two major types of serializability:

  • View-serializability.
  • Conflict-serializability.

A schedule is conflict-serializable if it’s conflict-equivalent to a serial schedule, i.e. there exists some sequence of swaps of non-conflicting pairs of operations that makes the schedule serial.

Any schedule that is conflict-serializable is also view-serializable, but not necessarily the opposite. Therefore we generally just need to ensure conflict-serializability.

Precedence graph

We need to be able to detect conflict-serializability before we can do anything. One of the tests is precedence graph.

The precedence graph for a schedule S contains:

  • A node for each committed transaction in S.
  • An arc from T1 to T2 if an operation in T1 precedes and is in conflict with an operation in T2.

An operation is either a read or a write. Therefore it follows naturally that a pair of conflicting operations is a read-write, write-read, or write-write pair on the same element. Each of these three pairs of operations, if executed in reverse order, could produce a different result. That’s the whole idea of conflict.
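These rules are mechanical enough to sketch in code. The following is a minimal illustration (the schedule format and transaction names are invented for the example): build the precedence graph from a schedule of (transaction, action, element) operations, then test it for cycles with a depth-first search.

```python
# Build a precedence graph and test it for cycles.
# An edge T1 -> T2 exists when an operation of T1 precedes a conflicting
# operation of T2 (read-write, write-read, or write-write on the same element).

def precedence_graph(schedule):
    edges = set()
    for i, (t1, a1, x1) in enumerate(schedule):
        for t2, a2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and (a1 == "W" or a2 == "W"):
                edges.add((t1, t2))
    return edges

def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
    visited, on_stack = set(), set()
    def dfs(u):
        visited.add(u)
        on_stack.add(u)
        for v in graph.get(u, ()):
            if v in on_stack or (v not in visited and dfs(v)):
                return True
        on_stack.discard(u)
        return False
    return any(n not in visited and dfs(n) for n in graph)

# T1 reads X before T2 writes X, and T2 reads Y before T1 writes Y:
# the graph contains T1 -> T2 and T2 -> T1, a cycle.
s = [("T1", "R", "X"), ("T2", "W", "X"), ("T2", "R", "Y"), ("T1", "W", "Y")]
print(has_cycle(precedence_graph(s)))  # True -> not conflict-serializable
```

An acyclic precedence graph, by contrast, can be topologically sorted, and that topological order is exactly the equivalent serial schedule.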


How exactly does the precedence graph help us detect conflict-serializability?

We need to prove the following:

  • If a precedence graph has no cycles, then the schedule is conflict-serializable.

Proof by induction:


  • If the precedence graph has a cycle, then the schedule is not conflict-serializable.

Proof by contradiction.


Therefore we have proved that the precedence graph is cyclic if and only if conflict-serializability is violated.


Locking is a mechanism used to prevent the inconsistencies or data corruption caused by transactions accessing the same element simultaneously. A database system should be engineered so that each lock is held for as short a time as possible.

From the database’s perspective, there are three types of locks used in locking:

  • Read lock (shared lock): Many read locks can bind to a database element at once. This kind of lock is requested immediately before a transaction needs to read an element, and is released as soon as the reading is done. However, an element that is share-locked cannot be exclusively locked.
  • Write lock (exclusive lock): Only one lock can bind to a database element. A write lock does not share its element with any other kind of lock. This kind of lock is requested immediately before a transaction needs to write to an element, but it can be released as late as the end of the transaction’s life, not necessarily immediately.
  • Update lock: A hybrid of a read lock and a write lock. A transaction requests an update lock on a database element when it predicts that it will eventually want to exclusively lock the element, but does not have to do so in the meantime.
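One common way to summarize these rules is a compatibility matrix: a requested lock is granted only if it is compatible with every lock already held on the element by other transactions. The sketch below encodes the semantics described above (shared locks coexist, exclusive locks exclude everything, update locks admit readers but not writers or other update locks):

```python
# Lock compatibility sketch: which requests may join which held locks.
# held lock -> set of request types compatible with it
COMPATIBLE = {
    "read":   {"read", "update"},  # shared lock: more readers and one updater ok
    "update": {"read"},            # update lock: admits readers, blocks writers
    "write":  set(),               # exclusive lock: blocks everything
}

def can_grant(requested, held_locks):
    """Grant `requested` only if it is compatible with every held lock."""
    return all(requested in COMPATIBLE[held] for held in held_locks)

print(can_grant("read", ["read"]))     # True: shared locks coexist
print(can_grant("write", ["read"]))    # False: must wait for readers
print(can_grant("update", ["read"]))   # True: an update lock can join readers
print(can_grant("update", ["update"])) # False: two updaters would deadlock
```

The last line is the whole point of the update lock: by refusing a second update lock, the matrix rules out the two-readers-both-upgrading scenario discussed below.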

So perhaps you’re wondering: okay, if you want to read an element, read lock it; if you want to write to an element, write lock it; when you’re done with the element, unlock it. No problem! But wait, why is an update lock needed?

To understand this, you have to know that most Database Management Systems support an upgrade mode. The idea is useful when you issue a SELECT query, meaning you read database elements without needing to modify them yet, and afterwards, when you’ve finished calculating something, you issue an UPDATE query. If we go with the usual types of locks (read and write locks), you would read lock the element, then unlock it, then write lock it again.

But things turn really nasty in the milliseconds after you release the read lock and before you acquire the write lock: what if another transaction somehow gets scheduled and requests a write lock on the element? That means our needed element was stolen before our query was done. If we wait until the thief transaction finishes, the change that transaction made may end up happening before the change we want in the UPDATE query, when we actually wanted them executed in the reverse order.

There lies the motivation for upgrade mode: when a transaction has read locked an element and later wants to modify it, it waits for the read locks of other transactions to be released and then throws in a write lock without ever releasing its own read lock. This solves the problem we mentioned.

Back to the question we were asking: why is an update lock needed? There is still an issue. Suppose a schedule consists of just two transactions, both of which have read locked an element. Now T1 wants to modify (upgrade its lock on) the element, but cannot, since T2 has read locked it. So T1 waits for T2 to finish. But T2 wants to modify (upgrade its lock on) the element too, and cannot either, since T1 has read locked it. There is no other transaction left in the schedule, so everything is delayed indefinitely. This is called a deadlock.

Here’s a fine picture describing deadlock.


Source: Levent Divilioglu’s answer

To resolve this issue, we need a new type of lock. But what exactly are the conditions this new type of lock must meet in order to prevent deadlock? Because deadlock arises whenever two upgradable locks want to throw in a write lock on the contended element, the new type of lock needs to forbid this behavior, i.e. forbid other write locks. Of course a plain write lock would resolve the issue, but it also takes away the advantage of upgradable locks (allowing others to read an element until the transaction wants to perform a modification). So we want a lock that is both upgradable and forbids write locks from other transactions. That is exactly what the update lock is.

The update lock forbids write locks from other transactions while allowing read locks; it waits for the other read locks to be released before throwing in a write lock, without releasing its own read lock. The update lock thus helps prevent deadlock.

Two-phase locking

Remember the example we mentioned earlier, where a transaction releases a read lock and, in that split second before it throws in a write lock, another transaction goes in and write locks the element, causing an undesirable result?

Yeah well, that sort of problem arises whenever a transaction unlocks an element and another transaction immediately locks the element inappropriately. Two-phase locking solves this problem by separating the locking process into two phases:

  1. Growing: the transaction acquires locks; the number of locks only increases and never decreases.
  2. Shrinking: the transaction releases locks; the number of locks only decreases and never increases.

It means that once a transaction releases an element, its relationship with that element ends and hence it is safe for other transactions to take control of the element.
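The two-phase rule itself is tiny: once a transaction has released any lock, it may never acquire another. A minimal sketch of a transaction object enforcing this invariant (the class and method names are invented for illustration):

```python
# Enforce the two-phase rule: after the first unlock (shrinking phase),
# any attempt to acquire a new lock is an error.

class TwoPhaseTxn:
    def __init__(self):
        self.locks = set()
        self.shrinking = False  # flips to True on the first unlock

    def lock(self, element):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock after an unlock")
        self.locks.add(element)

    def unlock(self, element):
        self.shrinking = True   # shrinking phase begins, permanently
        self.locks.discard(element)

t = TwoPhaseTxn()
t.lock("X")
t.lock("Y")      # growing phase: locks only increase
t.unlock("X")    # shrinking phase begins
try:
    t.lock("Z")  # illegal under 2PL
except RuntimeError as e:
    print(e)
```

The stricter variants mentioned next differ only in *when* the shrinking phase is allowed to start; rigorous 2PL holds every lock until the transaction commits or aborts.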

There are 3 types of two-phase locking.


Most Database Management Systems implement the rigorous type. Therefore we will assume rigorous two-phase locking from now on.

Read more on 2-phase locking here.


Concurrency control has something to do with recoverability too. Suppose a transaction has written some data to disk and later aborts; the written data must be undone to reflect the aborted state of the transaction. But what happens if a crash occurs right after the transaction has written the data? If we don’t know what the transaction was doing before the crash, we have no way to tell whether it would have been aborted had everything gone fine. And if we don’t know, there is no way to undo the written data, which means this data, belonging to an incorrect database state, is now part of our system. This will lead to many inconsistencies as time proceeds.

This calls for a method of recording exactly what the transactions are doing in real time, and logging is such a method.

A log is a sequence of records about what transactions have done. The log is kept in main memory and written to disk as soon as possible, so we don’t have to worry about the log getting lost in a crash.


A transaction log contains four statements:

  • < start T > Transaction T has begun.
  • < commit T > Transaction T has completed successfully and will make no further attempt to modify database elements.
  • < abort T > Transaction T could not complete successfully.
  • < T , X , v > or < T , X , v , w > Transaction T has changed database element X; the record stores the old value v (and, when both values are kept, the new value w).

A transaction has the following primitive operations:


The log only records when W(X, t) occurs, not when O(X) occurs. This means the log does not necessarily reflect the actual values on disk. When a crash happens, we cannot tell from the log alone whether a transaction has written a value to disk or not.

1) Undo logging

Order of execution in undo logging:

A record < T , x , val > is read as: transaction T changed x from the old value val.


Recovery with undo logging: classify the transactions in the log as either committed or uncommitted.

  1. If we see < commit T >, then we know from the image above that everything before < commit T > has been written to disk, so ignore T.
  2. If we see < start T > without a < commit T >, then the transaction might have written something to disk, so we undo it by restoring X to v for each of its < T , X , v > records.
  3. After we’re done with that uncommitted transaction, we write an < abort T > at the end of the log.
  4. Repeat until no transaction is left uncommitted (every transaction is either committed or aborted).
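The steps above can be sketched in a few lines. This is an illustrative model, not a real recovery manager: log records are tuples, the "disk" is a dict, and the backward scan ensures that for repeated writes the earliest old value is the one restored.

```python
# Sketch of recovery with undo logging.
# Log records: ("start", T), ("commit", T), ("write", T, X, old_value).

def undo_recover(log, disk):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    # Scan backwards so that the earliest old value wins for repeated writes.
    for rec in reversed(log):
        if rec[0] == "write":
            _, t, x, old = rec
            if t not in committed:
                disk[x] = old  # undo the uncommitted write
    # Append an abort record for every uncommitted transaction.
    for t in {rec[1] for rec in log if rec[0] == "start"} - committed:
        log.append(("abort", t))
    return disk

log = [("start", "T1"), ("write", "T1", "A", 5),
       ("start", "T2"), ("write", "T2", "B", 10),
       ("commit", "T1")]
disk = {"A": 8, "B": 99}  # T2's write of B=99 reached disk, then a crash
print(undo_recover(log, disk))  # {'A': 8, 'B': 10}: T2 undone, T1 kept
```

T1 committed, so its effects are ignored; T2 did not, so B is restored to its old value 10 and an < abort T2 > record is appended.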


2) Redo logging

Undo logging has a disadvantage: it requires all changes to be written to disk before a transaction can commit. This means we have to access the disk every time a transaction commits, which is costly. To avoid this, we have a different mode of logging called redo logging.

Order of execution in redo logging (circles indicate the parts that differ from undo):

A record < T , x , val > is read as: transaction T changed x to the new value val (as opposed to from val in undo logging).


Recovery with redo logging:

  1. Identify committed transactions.
  2. Scan the log from the beginning forward. For each < T , X , v > encountered:
    1. If T is not committed, ignore.
    2. If T is committed, write value v to X.
  3. For each incomplete transaction, write < abort T > and flush the log.
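The redo procedure mirrors the undo sketch with the direction reversed: scan forward and re-apply the new value of every write made by a committed transaction, ignoring writes of uncommitted ones. Again an illustrative model with tuple log records and a dict standing in for the disk:

```python
# Sketch of recovery with redo logging.
# Log records: ("start", T), ("commit", T), ("write", T, X, new_value).

def redo_recover(log, disk):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    for rec in log:  # forward scan
        if rec[0] == "write":
            _, t, x, new = rec
            if t in committed:
                disk[x] = new  # redo the committed write
    # Abort every transaction that started but never committed.
    for t in {rec[1] for rec in log if rec[0] == "start"} - committed:
        log.append(("abort", t))
    return disk

log = [("start", "T1"), ("write", "T1", "A", 5), ("commit", "T1"),
       ("start", "T2"), ("write", "T2", "B", 7)]
disk = {"A": 1, "B": 2}  # crash before T1's write of A reached disk
print(redo_recover(log, disk))  # {'A': 5, 'B': 2}: T1 redone, T2 ignored
```

T1 committed but its write never reached disk, so it is re-applied; T2's write is simply ignored, which is safe because under redo logging uncommitted writes are never flushed to disk before commit.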

3) Nonquiescent checkpointing and recovery


In short, for undo logging, the elements affected by T1…Tk are written to disk after the start ckpt record. For redo logging, the elements affected by transactions that committed before the start ckpt record are written to disk.


Undo logging ignores committed transactions and undoes uncommitted transactions.

Redo logging ignores uncommitted transactions and redoes committed transactions.