Reading Notes – The super tiny compiler

Source

I just finished reading this file. It gives a pretty good explanation of how a compiler works. Before reading it, I thought the idea of a compiler was to find each term of a particular language and replace it with the corresponding term of the target language. I wrote a small compiler (from Perl to Python) during my university studies. At that time I was a beginner and used a lot of if conditions to handle different situations…

The code example uses a similar approach, but the concept behind it really makes sense: we process the input step by step and transform it into a general intermediate format that can then be reused for other target languages.

Basic Concept

Most compilers break down into three primary stages: parsing, transformation, and code generation (a small pipeline sketch follows the list).

  • Parsing is taking raw code and turning it into a more abstract representation of the code.
  • Transformation takes this abstract representation and manipulates it to do whatever the compiler wants it to.
  • Code Generation takes the transformed representation of the code and turns it into new code.
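
As a rough sketch, these stages chain together as a small pipeline. The function names below follow the example project's naming, but treat the exact signatures as assumptions:

  // A compiler is basically a pipeline of the three stages above:
  // tokenizer + parser do the parsing, transformer does the transformation,
  // and codeGenerator does the code generation.
  function compiler(input) {
    var tokens = tokenizer(input);      // raw code -> tokens
    var ast    = parser(tokens);        // tokens   -> AST
    var newAst = transformer(ast);      // AST      -> transformed AST
    var output = codeGenerator(newAst); // AST      -> output code string
    return output;
  }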

Parsing

Parsing typically gets broken down into two phases: Lexical Analysis and Syntactic Analysis.

  • Lexical Analysis takes the raw code and splits it apart into these things called tokens by a thing called a tokenizer (or lexer).

Tokens are an array of tiny little objects that describe an isolated piece of the syntax. They could be numbers, labels, punctuation, operators, whatever.

  • Syntactic Analysis takes the tokens and reformats them into a representation that describes each part of the syntax and their relation to one another. This is known as an intermediate representation or Abstract Syntax Tree.

An Abstract Syntax Tree, or AST for short, is a deeply nested object that represents code in a way that is both easy to work with and tells us a lot of information.

e.g.

For the following syntax:

  (add 2 (subtract 4 2))

Tokens might look something like this:

  [
    { type: 'paren',  value: '('        },
    { type: 'name',   value: 'add'      },
    { type: 'number', value: '2'        },
    { type: 'paren',  value: '('        },
    { type: 'name',   value: 'subtract' },
    { type: 'number', value: '4'        },
    { type: 'number', value: '2'        },
    { type: 'paren',  value: ')'        },
    { type: 'paren',  value: ')'        },
  ]

And an Abstract Syntax Tree (AST) might look like this:

  {
    type: 'Program',
    body: [{
      type: 'CallExpression',
      name: 'add',
      params: [{
        type: 'NumberLiteral',
        value: '2',
      }, {
        type: 'CallExpression',
        name: 'subtract',
        params: [{
          type: 'NumberLiteral',
          value: '4',
        }, {
          type: 'NumberLiteral',
          value: '2',
        }]
      }]
    }]
  }
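
To make the lexical-analysis step concrete, here is a minimal tokenizer sketch for this lisp-style input; it only understands parentheses, numbers and names, and simplifies the real implementation:

  function tokenizer(input) {
    var tokens = [];
    var current = 0;

    while (current < input.length) {
      var char = input[current];

      if (char === '(' || char === ')') {        // single-character parens
        tokens.push({ type: 'paren', value: char });
        current++;
      } else if (/\s/.test(char)) {              // skip whitespace
        current++;
      } else if (/[0-9]/.test(char)) {           // consume a whole number
        var value = '';
        while (char && /[0-9]/.test(char)) {
          value += char;
          char = input[++current];
        }
        tokens.push({ type: 'number', value: value });
      } else if (/[a-z]/i.test(char)) {          // consume a whole name
        var name = '';
        while (char && /[a-z]/i.test(char)) {
          name += char;
          char = input[++current];
        }
        tokens.push({ type: 'name', value: name });
      } else {
        throw new TypeError('Unknown character: ' + char);
      }
    }

    return tokens;
  }

  // tokenizer('(add 2 (subtract 4 2))') produces the token array shown above.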

Transformation

This just takes the AST from the last step and makes changes to it. It can manipulate the AST in the same language or it can translate it into an entirely new language.

Looking at the AST above, it is made up of objects with a type property. Each of these is known as an AST Node. These nodes have defined properties on them that describe one isolated part of the tree.

We can have a node for a "NumberLiteral":

  {
    type: 'NumberLiteral',
    value: '2',
  }

Or maybe a node for a "CallExpression":

  {
    type: 'CallExpression',
    name: 'subtract',
    params: [...nested nodes go here...],
  }

When transforming the AST we can manipulate nodes by adding/removing/replacing properties, we can add new nodes, remove nodes, or we could leave the existing AST alone and create an entirely new one based on it.
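
A tiny sketch of those three options, using a node and its parent shaped like the examples above (plain object manipulation, nothing project-specific):

  // A CallExpression node and one of its params, as in the example above.
  var parent = { type: 'CallExpression', name: 'subtract', params: [] };
  var node = { type: 'NumberLiteral', value: '2' };
  parent.params.push(node);

  // 1. Mutate an existing node in place.
  node.value = '3';

  // 2. Replace a node inside its parent.
  parent.params[0] = { type: 'NumberLiteral', value: '3' };

  // 3. Or leave the original AST alone and build a new node based on it.
  var newNode = { type: 'NumberLiteral', value: node.value };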

Traversal

In order to navigate through all of these nodes, we need to be able to traverse through them. This traversal process goes to each node in the AST depth-first.

  {
    type: 'Program',
    body: [{
      type: 'CallExpression',
      name: 'add',
      params: [{
        type: 'NumberLiteral',
        value: '2'
      }, {
        type: 'CallExpression',
        name: 'subtract',
        params: [{
          type: 'NumberLiteral',
          value: '4'
        }, {
          type: 'NumberLiteral',
          value: '2'
        }]
      }]
    }]
  }

So for the above AST we would go:

  1. Program - Starting at the top level of the AST
  2. CallExpression (add) - Moving to the first element of the Program's body
  3. NumberLiteral (2) - Moving to the first element of CallExpression's params
  4. CallExpression (subtract) - Moving to the second element of CallExpression's params
  5. NumberLiteral (4) - Moving to the first element of the second CallExpression's params
  6. NumberLiteral (2) - Moving to the second element of the second CallExpression's params
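
A sketch of that depth-first walk over this AST shape, just logging the visiting order (hooking a visitor into the walk comes in the next section):

  function traverse(node) {
    console.log(node.type);              // visit the node itself first
    switch (node.type) {
      case 'Program':
        node.body.forEach(traverse);     // then recurse into its children
        break;
      case 'CallExpression':
        node.params.forEach(traverse);
        break;
      case 'NumberLiteral':
        break;                           // leaf node, nothing below it
    }
  }

  // traverse(ast) logs: Program, CallExpression, NumberLiteral,
  // CallExpression, NumberLiteral, NumberLiteral - the order listed above.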

Visitors

The basic idea here is that we are going to create a “visitor” object that has methods that will accept different node types.

  var visitor = {
    NumberLiteral() {},
    CallExpression() {},
  };

When we traverse our AST, we will call the methods on this visitor whenever we "enter" a node of a matching type.

In order to make this useful we will also pass the node and a reference to the parent node.

  var visitor = {
    NumberLiteral(node, parent) {},
    CallExpression(node, parent) {},
  };

As we traverse down, we're going to reach branches with dead ends. As we finish each branch of the tree we "exit" it. So going down the tree we "enter" each node, and going back up we "exit".

  -> Program (enter)
    -> CallExpression (enter)
      -> NumberLiteral (enter)
      <- NumberLiteral (exit)
      -> CallExpression (enter)
        -> NumberLiteral (enter)
        <- NumberLiteral (exit)
        -> NumberLiteral (enter)
        <- NumberLiteral (exit)
      <- CallExpression (exit)
    <- CallExpression (exit)
  <- Program (exit)
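
Putting the traversal and the visitor together, here is a sketch of a traverser that calls enter on the way down and exit on the way back up. To support both hooks, each visitor entry here is an object with enter and exit functions, which extends the plain-method shape above (the real source is close in spirit, but the details differ):

  function traverser(ast, visitor) {
    function traverseArray(array, parent) {
      array.forEach(function (child) {
        traverseNode(child, parent);
      });
    }

    function traverseNode(node, parent) {
      var methods = visitor[node.type];
      if (methods && methods.enter) {
        methods.enter(node, parent);            // going down the tree
      }

      switch (node.type) {
        case 'Program':
          traverseArray(node.body, node);
          break;
        case 'CallExpression':
          traverseArray(node.params, node);
          break;
        case 'NumberLiteral':
          break;                                // leaf node, no children
        default:
          throw new TypeError(node.type);
      }

      if (methods && methods.exit) {
        methods.exit(node, parent);             // going back up the tree
      }
    }

    traverseNode(ast, null);
  }

  // Usage:
  traverser(ast, {
    NumberLiteral: {
      enter(node, parent) { /* ... */ },
      exit(node, parent) { /* ... */ },
    },
  });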

Code Generation

The final phase of a compiler is code generation. Sometimes compilers will do things that overlap with transformation, but for the most part code generation just means taking our AST and string-ifying code back out.

Code generators work several different ways: some compilers will reuse the tokens from earlier, others will have created a separate representation of the code so that they can print nodes linearly, but from what I can tell most will use the same AST we just created, which is what we’re going to focus on.

Effectively our code generator will know how to “print” all of the different node types of the AST, and it will recursively call itself to print nested nodes until everything is printed into one long string of code.
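
A sketch of such a recursive generator, working directly on the AST shape shown earlier (the real compiler generates from a transformed AST, so this is a simplification):

  function codeGenerator(node) {
    switch (node.type) {
      case 'Program':
        // print every top-level expression on its own line
        return node.body.map(codeGenerator).join('\n');
      case 'CallExpression':
        // recursively print each param, then join them into a call
        return node.name + '(' + node.params.map(codeGenerator).join(', ') + ')';
      case 'NumberLiteral':
        return node.value;
      default:
        throw new TypeError(node.type);
    }
  }

  // For the AST above, codeGenerator(ast) returns: "add(2, subtract(4, 2))"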

The full code example can be found at the source link.

Reading Note – Cache and Downgrade solution for distributed systems

Cache

There is a good explanation of the multi-level (L1/L2/L3) cache; a small lookup sketch follows the list.

  • (L1) Level 1 Cache (2KB – 64KB) – Instructions are first searched in this cache. The L1 cache is very small in comparison to the others, which makes it faster than the rest.
  • (L2) Level 2 Cache (256KB – 512KB) – If the instructions are not present in the L1 cache, the lookup moves to the L2 cache, which is a slightly larger pool of cache, thus accompanied by some latency.
  • (L3) Level 3 Cache (1MB – 8MB) – With each cache miss, it proceeds to the next level of cache. This is the largest of all the caches; even though it is slower, it is still faster than RAM.
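
The lookup logic is basically a fall-through: try the fastest level first and only drop down on a miss. A tiny sketch of that idea in JavaScript, with plain maps standing in for the hardware levels (purely an analogy):

  // Plain maps stand in for the hardware cache levels and main memory.
  var l1Cache = new Map();
  var l2Cache = new Map();
  var l3Cache = new Map();
  var ram = new Map();

  function read(address) {
    var levels = [l1Cache, l2Cache, l3Cache];  // smallest and fastest first
    for (var i = 0; i < levels.length; i++) {
      if (levels[i].has(address)) {            // hit: stop at this level
        return levels[i].get(address);
      }
    }
    return ram.get(address);                   // every level missed
  }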

Downgrade solution for distribution system

Concept

In order to make sure the main service is still available during extreme situations such as heavy load, server errors, or a broken workflow, we can design a downgrade service. The downgraded service only provides limited functionality.

The server can downgrade the service automatically based on key metrics. We can also implement a manual switch to turn the downgraded service on and off.

Some services cannot be downgraded.
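
A minimal sketch of such a switch; the flag can be flipped manually or by a monitor watching key metrics, and the downstream calls are hypothetical names:

  // A single flag that a monitor (automatic) or an operator (manual) can flip.
  var downgraded = false;

  // getCoreItems / getRecommendations are hypothetical downstream calls.
  async function getProductPage(req) {
    if (downgraded) {
      // limited service: core data only, skip the non-essential parts
      return { items: await getCoreItems(req), degraded: true };
    }
    return {
      items: await getCoreItems(req),
      recommendations: await getRecommendations(req),
      degraded: false,
    };
  }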

Category

  • automatic or manual
  • read service or write service
  • multi-level or single-level

Scenario

  • normal: the service is slightly unstable because of upgrades or network issues. Use the automatic solution
  • warning: stability ranges from 95% to 100%. Use the automatic or the manual solution
  • error: availability is less than 90%. It may be caused by the DB connection pool being full or by too many requests. Use the automatic or the manual solution (a sketch mapping availability to these levels follows this list)
  • serious error: use the manual solution
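
A sketch of how an automatic monitor might map availability to these levels; the thresholds come from the list above, everything else (names, the unhandled 90–95% band) is an assumption:

  function classify(availability) {            // availability in percent
    if (availability < 90) return 'error';     // automatic or manual downgrade
    if (availability >= 95 && availability < 100) return 'warning';
    return 'normal';                           // the 90-95% band is not
  }                                            // spelled out in the note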

source

Reading Note – Interface development for getting products in distributed services

Cache Design

  • The product data will be saved to the cache (Redis)
  • Provide an interface for refreshing the cache
  • While Redis is being refreshed, the front end should still have data to show
  • The cached data should have a version number; every cache update should generate a new cache key
  • Save the cache keys to a list from which the latest version can be filtered by the current time (see the sketch after this list)
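
A sketch of that versioned-cache refresh, assuming a Node.js service with the ioredis client; key names and the data shape are made up for illustration:

  const Redis = require('ioredis');
  const redis = new Redis();

  async function refreshProductCache(products) {
    const version = Date.now();                 // new version per refresh
    const cacheKey = 'products:v' + version;    // new key, old key untouched

    // Write the refreshed data under the new key; readers on the old key
    // keep working while the refresh is in progress.
    await redis.set(cacheKey, JSON.stringify(products));

    // Track the key in a sorted set scored by time, so "latest version"
    // is simply the highest score.
    await redis.zadd('products:versions', version, cacheKey);
    return version;
  }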

API Design

  • interface parameters: version and page index
  • response data: the data and the current version
  • use a sorted set to get the latest version number
  • the first request returns the current latest version number
  • the client caches the version number; subsequent page requests need to carry the cached version number
  • return 304 if the client requests the first page and its local version number equals the current version number, so the client can avoid re-rendering the page
  • if there is no version number in the request, use the latest one (a sketch of this handler follows the list)
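
A sketch of that handler logic, Express-style, on top of the ioredis sketch above; parameter names and the pageOf helper are assumptions:

  async function getProducts(req, res) {
    // Latest version = highest-scored member of the sorted set
    // (assumes at least one refresh has already happened).
    const [latestKey] = await redis.zrevrange('products:versions', 0, 0);
    const latestVersion = Number(latestKey.split(':v')[1]);

    const page = Number(req.query.page || 1);
    const clientVersion = req.query.version ? Number(req.query.version) : null;

    // First page and the client's cached version is still current:
    // 304 so the client can skip re-rendering.
    if (page === 1 && clientVersion === latestVersion) {
      return res.status(304).end();
    }

    // No version in the request -> use the latest; otherwise stick to the
    // client's version so later pages stay consistent with page 1.
    const version = clientVersion || latestVersion;
    const data = JSON.parse(await redis.get('products:v' + version));
    res.json({ version: version, data: pageOf(data, page) }); // pageOf is a hypothetical pager
  }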

Tips:

  • the infinite increase of the version number: the version number keeps growing, so we need to delete the previous day's version numbers during the update; use a sorted set to filter the version numbers (see the cleanup sketch after this list)
  • when getting the latest version, capture the version set for the last hour, then filter it to get the latest one. When getting the data for a product, if the product does not exist in the current version, find a version number that contains the data
  • SLB: Server Load Balancing
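
A sketch of that cleanup, again assuming ioredis; the one-day cut-off follows the "last day" rule in the note, everything else is an assumption:

  // Run this during/after a refresh so the newly written version survives.
  async function pruneOldVersions() {
    const cutoff = Date.now() - 24 * 60 * 60 * 1000;   // keep roughly one day

    // Find the keys about to be dropped so their data keys can go too.
    const staleKeys = await redis.zrangebyscore('products:versions', '-inf', '(' + cutoff);
    if (staleKeys.length > 0) {
      await redis.del(...staleKeys);                    // remove the stale data
      await redis.zremrangebyscore('products:versions', '-inf', '(' + cutoff);
    }
  }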

Structure

System design diagram (image in the original post)

Summary

Factors need to be considered:

  • The performance of the interface
  • The stability of the interface
  • Error Tolerance
  • Server-side pressure: reduce the pressure on the server side
  • Service downgrade: needed during heavy load