Reading Notes – The super tiny compiler

Source

I just finished reading this file. It's a pretty good explanation of how a compiler works. Before reading it, I thought a compiler simply found the terms of one language and replaced them with the terms of the target language. I wrote a small compiler (from Perl to Python) during my university studies; I was a beginner at the time and used a lot of if conditions to handle the different cases…

The code example uses a similar approach, but the concept behind it really makes sense: we process the input step by step and transform it into a general format that can be reused for different target languages.

Basic Concept

Most compilers break down into three primary stages: parsing, transformation, and code generation.

  • Parsing is taking raw code and turning it into a more abstract representation of the code.
  • Transformation takes this abstract representation and manipulates it to do whatever the compiler wants it to.
  • Code Generation takes the transformed representation of the code and turns it into new code.

Parsing

Parsing typically gets broken down into two phases: Lexical Analysis and Syntactic Analysis.

  • Lexical Analysis takes the raw code and splits it apart into these things called tokens by a thing called a tokenizer (or lexer).

Tokens are an array of tiny little objects that describe an isolated piece of the syntax. They could be numbers, labels, punctuation, operators, whatever.

  • Syntactic Analysis takes the tokens and reformats them into a representation that describes each part of the syntax and their relation to one another. This is known as an intermediate representation or Abstract Syntax Tree.

An Abstract Syntax Tree, or AST for short, is a deeply nested object that represents code in a way that is both easy to work with and tells us a lot of information.

e.g.

For the following syntax:

  (add 2 (subtract 4 2))

Tokens might look something like this:

  [
    { type: 'paren',  value: '('        },
    { type: 'name',   value: 'add'      },
    { type: 'number', value: '2'        },
    { type: 'paren',  value: '('        },
    { type: 'name',   value: 'subtract' },
    { type: 'number', value: '4'        },
    { type: 'number', value: '2'        },
    { type: 'paren',  value: ')'        },
    { type: 'paren',  value: ')'        },
  ]

And an Abstract Syntax Tree (AST) might look like this:

  {
    type: 'Program',
    body: [{
      type: 'CallExpression',
      name: 'add',
      params: [{
        type: 'NumberLiteral',
        value: '2',
      }, {
        type: 'CallExpression',
        name: 'subtract',
        params: [{
          type: 'NumberLiteral',
          value: '4',
        }, {
          type: 'NumberLiteral',
          value: '2',
        }]
      }]
    }]
  }
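A minimal sketch of a tokenizer for this Lisp-like syntax, paraphrasing the tutorial's approach (not the exact source code):

function tokenizer(input) {
  let current = 0;
  let tokens = [];

  while (current < input.length) {
    let char = input[current];

    if (char === '(' || char === ')') {           // single-character paren tokens
      tokens.push({ type: 'paren', value: char });
      current++;
      continue;
    }

    if (/\s/.test(char)) {                        // skip whitespace
      current++;
      continue;
    }

    if (/[0-9]/.test(char)) {                     // consume a whole number
      let value = '';
      while (char && /[0-9]/.test(char)) {
        value += char;
        char = input[++current];
      }
      tokens.push({ type: 'number', value });
      continue;
    }

    if (/[a-z]/i.test(char)) {                    // consume a name like `add`
      let value = '';
      while (char && /[a-z]/i.test(char)) {
        value += char;
        char = input[++current];
      }
      tokens.push({ type: 'name', value });
      continue;
    }

    throw new TypeError('Unknown character: ' + char);
  }

  return tokens;
}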

Transformation

This just takes the AST from the last step and makes changes to it. It can manipulate the AST in the same language or it can translate it into an entirely new language.

There are these objects with a type property. Each of these is known as an AST Node. These nodes have defined properties on them that describe one isolated part of the tree.

We can have a node for a "NumberLiteral":

  {
    type: 'NumberLiteral',
    value: '2',
  }

Or maybe a node for a "CallExpression":

  {
    type: 'CallExpression',
    name: 'subtract',
    params: [...nested nodes go here...],
  }

When transforming the AST we can manipulate nodes by adding/removing/replacing properties, we can add new nodes, remove nodes, or we could leave the existing AST alone and create an entirely new one based on it.

Traversal

In order to navigate through all of these nodes, we need to be able to traverse them. The traversal process visits each node in the AST depth-first.

  {
    type: 'Program',
    body: [{
      type: 'CallExpression',
      name: 'add',
      params: [{
        type: 'NumberLiteral',
        value: '2'
      }, {
        type: 'CallExpression',
        name: 'subtract',
        params: [{
          type: 'NumberLiteral',
          value: '4'
        }, {
          type: 'NumberLiteral',
          value: '2'
        }]
      }]
    }]
  }

So for the above AST we would go:

  1. Program - starting at the top level of the AST
  2. CallExpression (add) - moving to the first element of the Program's body
  3. NumberLiteral (2) - moving to the first element of add's params
  4. CallExpression (subtract) - moving to the second element of add's params
  5. NumberLiteral (4) - moving to the first element of subtract's params
  6. NumberLiteral (2) - moving to the second element of subtract's params

Visitors

The basic idea here is that we are going to create a "visitor" object that has methods that will accept different node types.

  var visitor = {
    NumberLiteral() {},
    CallExpression() {},
  };

When we traverse our AST, we will call the methods on this visitor whenever we "enter" a node of a matching type.

In order to make this useful we will also pass the node and a reference to the parent node.

  var visitor = {
    NumberLiteral(node, parent) {},
    CallExpression(node, parent) {},
  };

As we traverse down, we're going to reach branches with dead ends. As we finish each branch of the tree we "exit" it. So going down the tree we "enter" each node, and going back up we "exit".

  -> Program (enter)
    -> CallExpression (enter)
      -> NumberLiteral (enter)
      <- NumberLiteral (exit)
      -> CallExpression (enter)
        -> NumberLiteral (enter)
        <- NumberLiteral (exit)
        -> NumberLiteral (enter)
        <- NumberLiteral (exit)
      <- CallExpression (exit)
    <- CallExpression (exit)
  <- Program (exit)
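A condensed sketch of a depth-first traverser that drives such a visitor. I'm assuming the visitor's methods are grouped as { enter, exit } pairs per node type, which is how the tutorial's final version works:

function traverser(ast, visitor) {
  function traverseArray(array, parent) {
    array.forEach(child => traverseNode(child, parent));
  }

  function traverseNode(node, parent) {
    let methods = visitor[node.type];

    if (methods && methods.enter) methods.enter(node, parent);  // going down: "enter"

    switch (node.type) {
      case 'Program':
        traverseArray(node.body, node);
        break;
      case 'CallExpression':
        traverseArray(node.params, node);
        break;
      case 'NumberLiteral':
        break;                                                  // leaf node, nothing below
      default:
        throw new TypeError(node.type);
    }

    if (methods && methods.exit) methods.exit(node, parent);    // coming back up: "exit"
  }

  traverseNode(ast, null);
}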

Code Generation

The final phase of a compiler is code generation. Sometimes compilers will do things that overlap with transformation, but for the most part code generation just means taking our AST and string-ifying code back out.

Code generators work several different ways: some compilers will reuse the tokens from earlier, others will have created a separate representation of the code so that they can print nodes linearly, but from what I can tell most will use the same AST we just created, which is what we're going to focus on.

Effectively our code generator will know how to "print" all of the different node types of the AST, and it will recursively call itself to print nested nodes until everything is printed into one long string of code.
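A simplified sketch of such a generator, printing C-style call syntax straight from the small AST shown earlier (the tutorial's version works on a transformed AST with a few more node types, so this is an illustration, not the source code):

function codeGenerator(node) {
  switch (node.type) {
    case 'Program':                 // print each top-level statement on its own line
      return node.body.map(codeGenerator).join('\n');
    case 'CallExpression':          // e.g. add(2, subtract(4, 2))
      return node.name + '(' + node.params.map(codeGenerator).join(', ') + ')';
    case 'NumberLiteral':
      return node.value;
    default:
      throw new TypeError(node.type);
  }
}

Chaining the pieces together, the whole compiler is roughly output = codeGenerator(transformer(parser(tokenizer(input)))).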

The code example can be found at the source link.

Reading Note – Cache and downgrade solutions for distributed systems

Cache

There is a good explanation of multi-level caches:

  • (L1) Level 1 cache (2KB – 64KB) – Instructions are first searched in this cache. The L1 cache is very small in comparison to the others, which makes it faster than the rest.
  • (L2) Level 2 cache (256KB – 512KB) – If the instructions are not present in the L1 cache, the lookup moves to the L2 cache, a slightly larger pool with some added latency.
  • (L3) Level 3 cache (1MB – 8MB) – With each cache miss, the lookup proceeds to the next level. This is the largest of all the caches; even though it is slower, it is still faster than RAM.

Downgrade solution for distributed systems

Concept

In order to keep the main service available during extreme situations such as high load, server errors, or a broken workflow, we can design a downgrade service that provides only a limited set of features.

The server can downgrade the service automatically based on key metrics. We can also implement a manual switch to turn the downgraded service on and off, as sketched below.

Some services cannot be downgraded.
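A toy sketch of such a switch; all names and the threshold are made up for illustration:

// Hypothetical downgrade switch for a read-heavy service
let degraded = false;

// Manual control, e.g. driven by an admin endpoint.
function setDegraded(value) { degraded = value; }

// Automatic control based on a key metric, e.g. error rate.
function reportErrorRate(rate) {
  if (rate > 0.10) degraded = true;       // assumed 10% threshold
}

const cachedSnapshot = { products: ['last known good data'] };

async function getProducts(fetchFromDb) {
  if (degraded) return cachedSnapshot;    // limited service: stale data only
  return fetchFromDb();                   // full service path
}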

Category

  • automatic or manual
  • read service or write service
  • multi-level or single-level

Scenario

  • normal: the service is unstable because of upgrades or network issues. Use the automatic solution.
  • warning: availability ranges between 95% and 100%. Use the automatic or the manual solution.
  • error: availability is less than 90%. It may be caused by the DB connection pool being full or by excessive request volume. Use the automatic or the manual solution.
  • serious error: use the manual solution.

Source

Reading Note – Interface development for getting products in a distributed service

Cache Design

  • The product data will be saved to a cache (Redis)
  • Provide an interface for refreshing the cache
  • While Redis is being refreshed, the front end should still have data to serve
  • The cached data should have a version number; every cache update should generate a new cache key
  • Save the cache keys to a list so that the latest version can be filtered by the current time

API Design

  • interface parameters: version and page index
  • response data: the data and the current version
  • use a sorted set to get the latest version number (see the sketch after the tips below)
  • the first request will respond with the current latest version number
  • the client caches the version number; requests for subsequent pages must carry the cached version number
  • return 304 if the client requests the first page and its local version number equals the current version number, so the client can avoid re-rendering the page
  • if the request carries no version number, use the latest one

Tips:

  • unbounded growth of version numbers: the version number keeps increasing, so we need to delete the previous day's version numbers during the update. Use a sorted set to filter the version numbers
  • when getting the latest version, fetch the version set for the last hour, then filter it to get the latest one. When getting the data for a product, if the product does not exist in the current version, find the version number that does contain it
  • SLB: Server Load Balancing
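A rough sketch of this sorted-set bookkeeping in Node.js with ioredis; the key names and the one-day/one-hour windows are my assumptions, not from the source:

const Redis = require('ioredis');
const redis = new Redis();

// On each cache refresh: store data under a new versioned key and
// record the version in a sorted set scored by timestamp.
async function publishVersion(products) {
  const version = Date.now();                       // monotonically increasing version
  await redis.set(`products:${version}`, JSON.stringify(products));
  await redis.zadd('product:versions', version, String(version));
  // Trim versions older than one day to stop unbounded growth.
  await redis.zremrangebyscore('product:versions', 0, version - 24 * 3600 * 1000);
  return version;
}

// On each read: take the newest version recorded within the last hour.
async function latestVersion() {
  const now = Date.now();
  const hits = await redis.zrevrangebyscore(
    'product:versions', now, now - 3600 * 1000, 'LIMIT', 0, 1);
  return hits[0] || null;
}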

Structure

(system design diagram)

Summary

Factors that need to be considered:

  • The performance of the interface
  • The stability of the interface
  • Error Tolerance
  • Server-side pressure: reduce the load on the server side
  • Service downgrade: for periods of high load

Best Practices for Sass

The project front end uses Vue, Vuex, and Sass, so here are some rules of best practice:

  • Variables – can be used for standard layout values, e.g. margins, borders, fonts, and colors
  • Nesting – for separate modules, a nested structure is easy to search and keeps the styles clear. It works well for component development
  • Reuse – repeatedly @include-ing the same mixin leads to duplicated CSS in the output

e.g.

@mixin clearfix {
  &:before, &:after {
    display: block;
    content: '';
    height: 0;
    clear: both;
  }
}

.container {
  @include clearfix;
}

.tab {
  @include clearfix;
}

Instead of the above code, we should do:

@mixin clearfix {
  &:before, &:after {
    display: block;
    content: '';
    height: 0;
    clear: both;
  }
}

.container, .tab {
  @include clearfix;
}

WordPress Self-defined Plugin

Composer

PHP Composer supports WordPress plugins through the wordpress-plugin package type (handled by the composer/installers package):

"package": {
    ...
   "type":"wordpress-plugin"
   ...
}

This will automatically install the Composer package within the WordPress plugins folder.

Plugin structure

Instead of creating multiple WordPress plugins, we can implement multiple extensions within a single plugin. The structure will look like:

Plugin
  |---- woocommerce extension
  |---- third-party auth
  |---- analysis
  |---- ...

We can also implement a page to manage the plugin and turn each extension on and off.

Improve the performance

We can use a plugin to cache the WordPress website. Caching can be applied to the front end, the DB, and anywhere else.

Tips:

  • It's not good practice to change functions inside the theme (e.g. its functions.php), because it may affect user access, and switching themes later may cause issues. So we should use plugins instead of the theme.
  • Security: we can hide or change the API URL for the admin and other security-related modules.

Best Practices for Dockerfiles

  • write a .dockerignore file
  • one container for one application
  • combine multiple RUN commands into one
  • avoid using the default latest tag
  • remove temp files after each RUN command
  • set WORKDIR and CMD
  • (optional) use ENTRYPOINT
  • use COPY instead of ADD
  • adjust the order of COPY and RUN (e.g. copy package.json first, run npm install, then copy the rest of the source code so the dependency layer stays cached)
  • use a health check (optional)

Auto Deployment with Docker Image (2)

Set up QA Deployment and Service

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: XXX
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: XXX
    spec:
      containers:
      - image: mongo
        name: XXX
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: XXX
            fsType: ext4

# db-service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: XXX
  name: XXX
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: XXX

# web-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: XXX-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: XXX
    spec:
      containers:
      - image: gcr.io/XXX/XXX
        name: XXX
        ports:
        - containerPort: 8080
          name: http-server

# web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: XXX
  labels:
    name: XXXX
spec:
  type: LoadBalancer
  loadBalancerIP: XXX
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: XXX

Set up the persistent storage

Tip: if the cluster has only one node and its capacity is not large enough, there may be an issue like "failed to fit in any node: fit failure on node …".

Auto Deployment with Docker Image (1)

Branch-Oriented Deployment

Each new feature should have a separate branch. Each branch should have its own image, which can be deployed to the QA server for testing.

After it has been fully tested, the branch should be merged into development and deployed to the Dev server.

If everyone is satisfied with the Dev server version, we deploy it to the Staging server.

Docker Image Repository

During the QA and deployment process, the Docker image should be passed between the different servers in order to control environment differences.

DB backup and replication

There should be an easy method to replicate the real data from the Staging server and load it into the Dev and QA servers.

Entire Workflow

Create a new branch

  • Create a new feature branch from development within Bitbucket
  • The branch name should include the current version, the Jira task number, and the date

Init Gcloud and Docker

  • Install gcloud on your local machine
  • Update the gcloud command line tools to the latest version
  • Install Docker on your local machine
  • Make sure Docker is running

Check the Deployment Tool package for QA Deployment

/deploymentToolBox
 /QA
  /createAndPushImagesToContainerRepository.sh
  /Dockerfile
   

Execute the command

A cluster has been created for QA and Staging:

gcloud container clusters get-credentials sl-qa-staging-api-cluster --zone asia-east1-a --project stream-lending

An issue about Ruby

In Ruby, a symbol is different from a string: you cannot directly use a string to look up an attribute in a hash whose keys are symbols.

The hash keys here are symbols, so we need to convert the string to a symbol to check whether the key exists in the hash.

String#to_sym (public)
Returns the Symbol corresponding to str, creating the symbol if it did not previously exist. See Symbol#id2name.

"Koala".intern         #=> :Koala
s = 'cat'.to_sym       #=> :cat
s == :cat              #=> true
s = '@cat'.to_sym      #=> :@cat
s == :@cat             #=> true
This can also be used to create symbols that cannot be represented using the :xxx notation.

'cat and dog'.to_sym   #=> :"cat and dog"

Vue and Vuex

Vuex

The workflow of the store:

Good example

Basic Concepts

The store is the center of Vuex: it is where Vuex keeps all the data together, and it holds the application state. When Vue components retrieve state from it, they will reactively and efficiently update whenever the store's state changes.

const store = new Vuex.Store({
  state: {
    count: 0
  },
  mutations: {
    increment (state) {
      state.count++
    }
  }
})

You cannot directly mutate the store’s state. The only way to change a store’s state is by explicitly committing mutations.

If we want to execute the increment mutation:

store.commit('increment')

Vuex uses a single state tree which contains all your application level state and serves as the “single source of truth”.

This also means you will usually have only one store for each application.

As a best practice, we can create a store folder:

store
  |_index.js
  |_action.js
  |_mutation.js
  |_getter.js
  |_mutation_types.js

Within the store/index.js, we set:

Vue.use(Vuex)

Then use it in main.js

new Vue({
	router,
	store,
}).$mount('#app')

By providing the store option to the root instance, the store will be injected into all child components of the root and will be available on them as this.$store

For Example:

const Counter = {
  template: `<div>{{ count }}</div>`,
  computed: {
    count () {
      return this.$store.state.count
    }
  }
}

mapState

mapState is a Vuex helper. When a component uses several pieces of store state, declaring a computed getter for each of them one by one gets repetitive and verbose. We can use mapState to map component properties to state directly.

e.g.

// in full builds helpers are exposed as Vuex.mapState
import { mapState } from 'vuex'

export default {
  // ...
  computed: mapState({
    // arrow functions can make the code very succinct!
    count: state => state.count,

    // passing the string value 'count' is the same as `state => state.count`
    countAlias: 'count',

    // to access local state with `this`, a normal function must be used
    countPlusLocalState (state) {
      return state.count + this.localCount
    }
  })
}

Object Spread Operator

This is a more advanced way of using mapState, e.g.:

computed: {
  localComputed () { /* ... */ },
  // mix this into the outer object with the object spread operator
  ...mapState({
    // ...
  })
}

getters

Getters are functions defined in the store whose derived results can be reused by multiple components.
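For example, a minimal sketch based on the todos example in the Vuex docs:

const store = new Vuex.Store({
  state: {
    todos: [
      { id: 1, text: '...', done: true },
      { id: 2, text: '...', done: false }
    ]
  },
  getters: {
    // derived state, cached and shared across components
    doneTodos: state => state.todos.filter(todo => todo.done)
  }
})

// In a component:
// computed: { doneTodos () { return this.$store.getters.doneTodos } }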

mapGetters

mapGetters is similar to mapState: it reduces the boilerplate of mapping store getters into local computed properties.
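Continuing the sketch above:

import { mapGetters } from 'vuex'

export default {
  // ...
  computed: {
    // mix the getters into computed with the object spread operator
    ...mapGetters([
      'doneTodos' // maps this.doneTodos to this.$store.getters.doneTodos
    ])
  }
}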

Commit with Payload

mutations: {
  increment (state, payload) {
    state.count += payload.amount
  }
}
store.commit('increment', {
  amount: 10
})

Using Constants for Mutation Types

It is a best practice to define mutation types as constants within the project.

// mutation-types.js
export const SOME_MUTATION = 'SOME_MUTATION'
// store.js
import Vuex from 'vuex'
import { SOME_MUTATION } from './mutation-types'

const store = new Vuex.Store({
  state: { ... },
  mutations: {
    // we can use the ES2015 computed property name feature
    // to use a constant as the function name
    [SOME_MUTATION] (state) {
      // mutate state
    }
  }
})

Mutations Must Be Synchronous

Asynchronous logic (including async/await) does not belong inside mutations; it should live in actions, which commit mutations once the async work completes.

mapMutations

mapMutations is a convenient way to map component methods to store.commit calls:

import { mapMutations } from 'vuex'

export default {
  // ...
  methods: {
    ...mapMutations([
      'increment', // map this.increment() to this.$store.commit('increment')

      // mapMutations also supports payloads:
      'incrementBy' // this.incrementBy(amount) maps to this.$store.commit('incrementBy', amount)
    ]),
    ...mapMutations({
      add: 'increment' // map this.add() to this.$store.commit('increment')
    })
  }
}

Modules

Due to using a single state tree, all state of our application is contained inside one big object. However, as our application grows in scale, the store can get really bloated.

To help with that, Vuex allows us to divide our store into modules. Each module can contain its own state, mutations, actions, getters, and even nested modules – it’s fractal all the way down:

const moduleA = {
  state: { ... },
  mutations: { ... },
  actions: { ... },
  getters: { ... }
}

const moduleB = {
  state: { ... },
  mutations: { ... },
  actions: { ... }
}

const store = new Vuex.Store({
  modules: {
    a: moduleA,
    b: moduleB
  }
})

store.state.a // -> moduleA's state
store.state.b // -> moduleB's state

Action

Asynchronous logic should be encapsulated in, and can be composed with actions.
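A minimal sketch of an action that does async work and then commits, based on the incrementAsync example in the Vuex docs:

const store = new Vuex.Store({
  state: { count: 0 },
  mutations: {
    increment (state) {
      state.count++
    }
  },
  actions: {
    // actions receive a context object exposing commit/dispatch/state
    async incrementAsync ({ commit }) {
      await new Promise(resolve => setTimeout(resolve, 1000))
      commit('increment') // the mutation itself stays synchronous
    }
  }
})

store.dispatch('incrementAsync')

For larger applications, the Vuex docs suggest a project structure along these lines: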

├── index.html
├── main.js
├── api
│   └── ... # abstractions for making API requests
├── components
│   ├── App.vue
│   └── ...
└── store
    ├── index.js          # where we assemble modules and export the store
    ├── actions.js        # root actions
    ├── mutations.js      # root mutations
    └── modules
        ├── cart.js       # cart module
        └── products.js   # products module

CSS Grid

CSS Grid is a layout solution.

Link

System Design Practice (2)

Outline use cases, constraints, and assumptions

We need to ask the following questions:

  • Who is going to use it?
  • How are they going to use it?
  • How many users are there?
  • What does the system do?
  • What are the inputs and outputs of the system?
  • How much data do we expect to handle?
  • How many requests per second do we expect?
  • What is the expected read to write ratio?

Tips:

Sometimes we need to do back-of-the-envelope calculations.

Back-of-the-envelope calculations are estimates you create using a combination of thought experiments and common performance numbers to get a good feel for which designs will meet your requirements.

Common rules:

  • Memory is fast and disk is slow (yea, obviously)
  • Writes are roughly 40 times more expensive than reads (that's why we may need to separate the write DB from the read DB)
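A hypothetical worked example; every number here is made up purely for illustration:

// Back-of-the-envelope estimate for an imaginary photo-sharing service
const users = 10_000_000;                 // assumed monthly active users
const writesPerUserPerDay = 2;            // assumed uploads per user per day
const readWriteRatio = 100;               // assumed 100 reads per write

const writesPerSecond = (users * writesPerUserPerDay) / 86_400; // ~231 writes/s
const readsPerSecond = writesPerSecond * readWriteRatio;        // ~23,100 reads/s

const bytesPerPhoto = 500 * 1024;         // assumed ~500 KB per photo
const bytesPerYear =
  users * writesPerUserPerDay * 365 * bytesPerPhoto;            // ~3.7e15 B ≈ 3.4 PB

console.log({ writesPerSecond, readsPerSecond, bytesPerYear });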

Powers of two table

Power   Exact Value           Approx Value   Bytes
2^7     128
2^8     256
2^10    1,024                 1 thousand     1 KB
2^16    65,536                               64 KB
2^20    1,048,576             1 million      1 MB
2^30    1,073,741,824         1 billion      1 GB
2^32    4,294,967,296         4 billion      4 GB
2^40    1,099,511,627,776     1 trillion     1 TB