WordPress Self-defined Plugin


PHP Composer has support for WordPress plugins (through installer packages):

"package": {

This will automatically install the composer package within wordpress plugin folder.
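A sketch of how this looks in composer.json, assuming the widely used composer/installers package (the vendor and package names below are placeholders):

```json
{
    "require": {
        "composer/installers": "^1.0",
        "example-vendor/example-wordpress-plugin": "^1.0"
    },
    "extra": {
        "installer-paths": {
            "wp-content/plugins/{$name}/": ["type:wordpress-plugin"]
        }
    }
}
```

Packages whose own composer.json declares `"type": "wordpress-plugin"` are then placed under wp-content/plugins instead of vendor/.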

Plugins structure

Instead of generating multiple plugins for WordPress, we can implement multiple extensions within one single plugin. The structure will look like:

  |---- WooCommerce extension
  |---- third-party auth
  |---- ...

We can also implement an admin page to manage the plugin and turn each extension on/off.

Improve the performance

We can use a plugin to cache the WordPress website. Caching can be applied to the front end, the database, and other layers.


  • It’s not good practice to change functions inside the theme’s functions.php, because it may affect user access, and switching themes later can break things. So we should use plugins instead of the theme.
  • Security. We can hide or change the API URL for the admin area and other security modules.

Best Practice of Dockerfile


  • Write a .dockerignore file
  • One container for one application
  • Combine multiple RUN commands into one
  • Avoid using the default latest tag
  • Remove temp files after each RUN command
  • Set WORKDIR and CMD
  • (optional) Use ENTRYPOINT
  • Use COPY instead of ADD
  • Adjust the order of COPY and RUN (e.g. copy package.json first, run npm install, then copy the rest of the source code)
  • Use a health check (optional)
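A short sketch applying several of the rules above (a Node.js app is assumed; image tag and file names are illustrative):

```dockerfile
# pin a specific tag instead of the default :latest
FROM node:18-alpine

WORKDIR /app

# copy package.json first so the npm install layer is cached
# until the dependencies actually change
COPY package*.json ./
RUN npm install --production && npm cache clean --force

# now copy the rest of the source code
COPY . .

EXPOSE 8080
CMD ["node", "server.js"]
```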

Auto Deployment with Docker Image (2)

Setup QA Deployment and Service

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: XXX
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: XXX
    spec:
      containers:
      - image: mongo
        name: XXX
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: XXX
          fsType: ext4

apiVersion: v1
kind: Service
metadata:
  name: XXX
  labels:
    name: XXX
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: XXX

# web-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: XXX-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: XXX
    spec:
      containers:
      - image: gcr.io/XXX/XXX
        name: XXX
        ports:
        - containerPort: 8080
          name: http-server

# web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: XXX
  labels:
    name: XXXX
spec:
  type: LoadBalancer
  loadBalancerIP: XXX
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: XXX

Setup the persistent storage

Tip: if the cluster only has one node and its capacity is not sufficient, you may hit the error “failed to fit in any node: fit failure on node”.

Auto Deployment with Docker Image (1)

Branch-Oriented Deployment

Each new feature should have a separate branch. Each branch should have its own image, which can be deployed to the QA server for testing.

After full testing, the branch should be merged into development and deployed to the Dev Server.

If everyone is satisfied with Dev Server version, then we deploy it to Staging Server.

Docker Image Repository

During the QA and deployment process, the Docker image should be passed between the different servers in order to keep the environments consistent.

DB backup and replication

There should be an easy way to replicate the real data from the Staging server and load it into the Dev and QA servers.

Entire Workflow

Create a new branch

  • Create a new feature branch from development within bitbucket
  • The branch name should include the current version, the Jira task number, and the date

Init Gcloud and Docker

  • Install gcloud on your local machine
  • Update the gcloud command line tools to the latest version
  • Install Docker on your local machine
  • Make sure Docker is running

Check the Deployment Tool package for QA Deployment


Execute the command

A cluster has been created for QA and Staging:

gcloud container clusters get-credentials sl-qa-staging-api-cluster --zone asia-east1-a --project stream-lending

An issue about ruby

In Ruby, a Symbol is different from a String. You cannot directly use a String to look up an attribute in a hash whose keys are Symbols.

The keys of such a hash are Symbols, so we need to convert the String to a Symbol to check whether the key exists in the hash.

to_sym() public
Returns the Symbol corresponding to str, creating the symbol if it did not previously exist. See Symbol#id2name.

"Koala".intern         #=> :Koala
s = 'cat'.to_sym       #=> :cat
s == :cat              #=> true
s = '@cat'.to_sym      #=> :@cat
s == :@cat             #=> true
This can also be used to create symbols that cannot be represented using the :xxx notation.

'cat and dog'.to_sym   #=> :"cat and dog"
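For example (the hash and key names below are illustrative): a hash written with the `key: value` shorthand has Symbol keys, so a String lookup misses:

```ruby
params = { name: "Koala", kind: "marsupial" }  # keys are Symbols
key = "name"                                   # a String, e.g. from user input

params[key]                 # => nil ("name" != :name)
params[key.to_sym]          # => "Koala"
params.key?("kind".to_sym)  # => true
```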

Vue and Vuex


The workflow of the store

Basic Concepts

The store is the center of Vuex; it is where Vuex keeps all data together and holds the application state.
When Vue components retrieve state from it, they will reactively and efficiently update whenever the store’s state changes.

const store = new Vuex.Store({
  state: {
    count: 0
  },
  mutations: {
    increment (state) {
      state.count++
    }
  }
})

You cannot directly mutate the store’s state. The only way to change a store’s state is by explicitly committing mutations.

If we want to execute the increment function, we commit the mutation:

store.commit('increment')


Vuex uses a single state tree which contains all your application level state and serves as the “single source of truth”.

This also means usually you will have only one store for each application

As a best practice, we can create a store folder.


Within the store/index.js, we set:
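For example, store/index.js might look like the following sketch (assuming Vue 2 with Vuex 3; it simply packages the counter store from above, and is not the author’s actual file):

```javascript
// store/index.js — a minimal sketch
import Vue from 'vue'
import Vuex from 'vuex'

Vue.use(Vuex)

export default new Vuex.Store({
  state: {
    count: 0
  },
  mutations: {
    increment (state) {
      state.count++
    }
  }
})
```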


Then use it in main.js

new Vue({
  el: '#app',
  store
})

By providing the store option to the root instance, the store will be injected into all child components of the root and will be available on them as this.$store

For Example:

const Counter = {
  template: `<div>{{ count }}</div>`,
  computed: {
    count () {
      return this.$store.state.count
    }
  }
}


mapState is a Vuex helper. When a component needs several pieces of store state, declaring a separate computed getter for each one is verbose. We can use mapState to generate those computed properties, mapping attributes on this to state.


// in full builds helpers are exposed as Vuex.mapState
import { mapState } from 'vuex'

export default {
  // ...
  computed: mapState({
    // arrow functions can make the code very succinct!
    count: state => state.count,

    // passing the string value 'count' is same as `state => state.count`
    countAlias: 'count',

    // to access local state with `this`, a normal function must be used
    countPlusLocalState (state) {
      return state.count + this.localCount
    }
  })
}

Object Spread Operator

The object spread operator lets us mix mapState with local computed properties, e.g.

computed: {
  localComputed () { /* ... */ },
  // mix this into the outer object with the object spread operator
  ...mapState({
    // ...
  })
}


getters are functions defined in the store that derive values from state and can be reused across multiple components.
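The idea can be sketched in plain JavaScript, outside the real Vuex store (the todo data below is made up for illustration):

```javascript
// a getter is just a function that derives data from state
const state = {
  todos: [
    { text: 'write docs', done: true },
    { text: 'fix bug', done: false }
  ]
}

const getters = {
  doneTodos: state => state.todos.filter(todo => todo.done)
}

console.log(getters.doneTodos(state))
// [ { text: 'write docs', done: true } ]
```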


mapGetters is similar to mapState. It reduces the boilerplate of importing getter functions into components.

Commit with Payload

mutations: {
  increment (state, payload) {
    state.count += payload.amount
  }
}

store.commit('increment', {
  amount: 10
})

Using Constants for Mutation Types

The best practice is to define mutation types within the project.

// mutation-types.js
export const SOME_MUTATION = 'SOME_MUTATION'

// store.js
import Vuex from 'vuex'
import { SOME_MUTATION } from './mutation-types'

const store = new Vuex.Store({
  state: { ... },
  mutations: {
    // we can use the ES2015 computed property name feature
    // to use a constant as the function name
    [SOME_MUTATION] (state) {
      // mutate state
    }
  }
})

Mutations Must Be Synchronous

So async/await does not belong in mutation functions; asynchronous logic should go into actions, which commit mutations once the async work completes.


mapMutations is a better way to import mutation functions

import { mapMutations } from 'vuex'

export default {
  // ...
  methods: {
    ...mapMutations([
      'increment', // map this.increment() to this.$store.commit('increment')

      // mapMutations also supports payloads:
      'incrementBy' // this.incrementBy(amount) maps to this.$store.commit('incrementBy', amount)
    ]),
    ...mapMutations({
      add: 'increment' // map this.add() to this.$store.commit('increment')
    })
  }
}


Due to using a single state tree, all state of our application is contained inside one big object. However, as our application grows in scale, the store can get really bloated.

To help with that, Vuex allows us to divide our store into modules. Each module can contain its own state, mutations, actions, getters, and even nested modules – it’s fractal all the way down:

const moduleA = {
  state: { ... },
  mutations: { ... },
  actions: { ... },
  getters: { ... }
}

const moduleB = {
  state: { ... },
  mutations: { ... },
  actions: { ... }
}

const store = new Vuex.Store({
  modules: {
    a: moduleA,
    b: moduleB
  }
})

store.state.a // -> moduleA's state
store.state.b // -> moduleB's state


Asynchronous logic should be encapsulated in, and can be composed with actions.
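The action flow can be sketched in plain JavaScript, outside the real Vuex store (the hand-wiring below is hypothetical, only to show how actions commit mutations):

```javascript
// mutations synchronously change state
const mutations = {
  increment: (state) => { state.count += 1 }
}

// actions commit mutations; unlike mutations, they may contain async logic
const actions = {
  increment ({ commit }) { commit('increment') }
}

// hand-wire a tiny "store" to illustrate the flow
const state = { count: 0 }
const commit = (name) => mutations[name](state)

actions.increment({ commit })
console.log(state.count) // 1
```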

├── index.html
├── main.js
├── api
│   └── ... # abstractions for making API requests
├── components
│   ├── App.vue
│   └── ...
└── store
    ├── index.js          # where we assemble modules and export the store
    ├── actions.js        # root actions
    ├── mutations.js      # root mutations
    └── modules
        ├── cart.js       # cart module
        └── products.js   # products module

CSS Grid

CSS Grid is a layout solution.
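As a minimal sketch (the class name is made up): a three-column grid with a 10px gutter:

```css
/* a hypothetical container laid out as a 3-column grid */
.container {
  display: grid;
  grid-template-columns: repeat(3, 1fr); /* three equal-width columns */
  grid-gap: 10px;                        /* gutter between cells */
}
```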


System Design Practice (2)

Outline use cases, constraints, and assumptions

We need to ask following questions:

  • Who is going to use it?
  • How are they going to use it?
  • How many users are there?
  • What does the system do?
  • What are the inputs and outputs of the system?
  • How much data do we expect to handle?
  • How many requests per second do we expect?
  • What is the expected read to write ratio?


Sometimes we need to do a back-of-the-envelope calculation.

Back-of-the-envelope calculations are estimates you create using a combination of thought experiments and common performance numbers to get a good feel for which designs will meet your requirements.

Common rules:

  • Memory is fast and disk is slow (yea obviously)
  • Writes are 40 times more expensive than reads (that’s why we may need to separate write db and read db)

Powers of two table

Power   Exact Value         Approx. Value   Bytes
2^7     128
2^8     256
2^10    1,024               1 thousand      1 KB
2^16    65,536                              64 KB
2^20    1,048,576           1 million       1 MB
2^30    1,073,741,824       1 billion       1 GB
2^32    4,294,967,296                       4 GB
2^40    1,099,511,627,776   1 trillion      1 TB

System Design Practice (1)

Today I went through the basics of system design. It is actually a well-structured skill tree. The majority of the technical terms are quite familiar to me. During my daily reading I have already encountered or used some of them, such as replication, master-slave, load balancing, and federation. But I hadn’t taken the time to review them and connect my working experience with the best practices.

I have been using MongoDB for a few months and I need to quickly pick up the concepts of SQL databases. There were some incorrect concepts regarding NoSQL design, but I still learned a lot from it.

Tomorrow I will go through all topics and terms again and some methodologies about system design best practice. But we need to start with a handy and quick try. Then I can figure out my weakness and what I need to learn tomorrow.

Let’s begin and have some fun!

Step 1: Outline Use Cases and constraints

First, we need to have a clear understanding of what kind of situations and scenarios we need to handle. That’s why we need to abstract the use cases.

Previously, I worked on the Revel-Xero integration project.

Use cases

Here are some scoped use cases:

  • User registers and connects Revel and Xero accounts
  • Service extracts sales records from the Revel account
    • Updates daily
    • Categorizes sales orders by Product, Product Class, and Establishment
    • Analyzes monthly spending by category
  • Service generates sales invoices and pushes them to the Xero account
    • Pushes daily
    • Allows users to manually set the account mapping and pushing schedule
    • Sends notifications when a push is approaching or fails
  • Service has high availability

Now we have three use cases. The real scenario is much more complex: it also includes sales, purchase orders, payroll, and item sync. Invoices also have multiple formats, and the sales, payment, and tax mappings should be flexible enough. But the workflow is similar, so let’s focus on the current scope.

Constraints and assumptions

(Question: what’s the best practice of calculating the constraints and assumptions? Need research)

State assumptions

  • Usually, once the account is set up and working, the user only comes back on a monthly basis
  • There is no need for real-time updates. Revel is not a strongly consistent system, so we delay 1-2 days and then sync the data
  • Revel only has around 1,000 customers in AU, but our target is the entire Asian market. So let’s assume 10 thousand users
    • Usually, one user will only have 1 establishment. So 10k establishments
    • Each establishment usually has around 1,000 sales orders per day. 10 million transactions per day
    • 300 million transactions per month
    • One user has one Revel account and one Xero account, so 10k Revel accounts and 10k Xero accounts
    • 20k read requests per month
    • 100:1 write to read ratio
    • Write-heavy: users make transactions daily but few visit the site daily

Calculate Usage

In case we forget: 1 English letter = 1 byte; 1 byte = 8 bits (2^8 = 256 possible values); 1 Chinese character = 2 bytes

  • Size per transaction:
    • user_id: 8 bytes
    • created: 8 bytes
    • product: 32 bytes
    • product_class: 10 bytes
    • establishment: 12 bytes
    • amount: 5 bytes
    • Total: ~ 75 bytes
  • 20 GB of new transaction content per month, 240 GB per year
    • 720 GB of new transaction in 3 years
  • 116 transactions per second on average
  • 0.017 read requests per second on average
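The estimates above can be sanity-checked with quick arithmetic; this is just a re-derivation of the bullet figures from the stated assumptions:

```python
# back-of-the-envelope check of the transaction estimates above
establishments = 10_000
orders_per_establishment_per_day = 1_000

tx_per_day = establishments * orders_per_establishment_per_day  # 10 million
tx_per_month = tx_per_day * 30                                  # 300 million

# per-transaction size from the field list above
bytes_per_tx = 8 + 8 + 32 + 10 + 12 + 5                         # 75 bytes

gb_per_month = tx_per_month * bytes_per_tx / 1e9                # 22.5 GB
tx_per_second = tx_per_day / 86_400                             # ~116

print(tx_per_month, gb_per_month, round(tx_per_second))
```

(22.5 GB per month is rounded to ~20 GB in the notes above.)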

Handy conversion guide:

  • 2.5 million seconds per month
  • 1 request per second = 2.5 million requests per month
  • 40 requests per second = 100 million requests per month
  • 400 requests per second = 1 billion requests per month

Step 2: Create a high-level design

Outline a high-level design with all important components.

(To be continued… It’s 12 am now so I will try to finish it tomorrow!)

Kubernetes (4)

Setup controller and service

Now, we need to create a Replication Controller for the application. Because if a standalone Pod dies, it won’t restart automatically.

# web-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: gcr.io/<YOUR-PROJECT-ID>/myapp
        name: web
        ports:
        - containerPort: 3000
          name: http-server

kubectl create -f web-controller.yml

Then we need to create a service as an interface for those pods.

This is just like the “link” command line option we used with Docker compose.

# web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    name: web

kubectl create -f web-service.yml
  • The type is LoadBalancer. This is a cool feature that will make Google Cloud Platform create an external network load balancer automatically for this service!
  • We map external port 80 to the internal port 3000, so we can serve HTTP traffic without messing with Firewalls.

We can use the following command to check pod status.

kubectl get pods

In order to find the IP address of our app, run this command:

$ gcloud compute forwarding-rules list
abcdef   us-central1   104.197.XXX.XXX  TCP         us-xxxx

The ideal structure

So now we have two pods for the application and one web service which holds the external IP.

Now we need to setup db service for our application.

MongoDB has a concept of Replica Set

A replica set is a group of mongod instances that maintain the same data set. A replica set contains several data bearing nodes and optionally one arbiter node. Of the data bearing nodes, one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.

When a primary does not communicate with the other members of the set for more than 10 seconds, an eligible secondary will hold an election to elect itself the new primary. The first secondary to hold an election and receive a majority of the members’ votes becomes primary.

We can follow another blog to set it up

The writer created a repository to automatically configure the MongoDB Replica Set. I forked the repository.

To setup:

git clone https://github.com/thesandlord/mongo-k8s-sidecar.git
cd /mongo-k8s-sidecar/example/StatefulSet/
kubectl apply -f googlecloud_ssd.yaml
kubectl apply -f mongo-statefulset.yaml


  • Be careful about the zone difference

Rolling update

If we only need to update the container image:

kubectl rolling-update NAME [NEW_NAME] --image=IMAGE:TAG

Web UI

It’s better to set up a UI dashboard for your cluster. All relevant operations can be done via that dashboard.

Bind static IP to service external ip

  • create a service as usual
  • Once your app is up, make note of the External IP using
kubectl get services
  • Now go to the Google Cloud Platform Console -> Networking -> External IP Addresses.
  • Find the IP you were assigned earlier. Switch it from “Ephemeral” to “Static.” You will have to give it a name and it would be good to give it a description so you know why it is static.
  • Then modify your service (or service yaml file) to point to this static address. I’m going to modify the yaml.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: LoadBalancer
  loadBalancerIP: <YOUR-STATIC-IP>   # the static IP reserved above
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    name: web
  • Once your yaml is modified you just need to run it; use
kubectl apply -f service.yml

Kubernetes (3)

Package Up The Image With Dockerfile

By following the tutorial, we need to first generate an independent image for production.

Later we can use Google’s new feature (Build Triggers) to watch for branch updates and automatically build and push the staging image to the Container Registry.

Entire Process to Setup Application

Setup cluster via gcloud

We can setup new cluster via gcloud command or via gcloud GUI

Login to the gcloud cluster

gcloud container clusters get-credentials cluster-name --zone=asia-east1-a

Kubernetes (2)

Expose the Application Publicly


  |_ Deployment (Multi)  = Application (Deployed)
         |_ Node (Multi)
              |_ Pod (Multi) (Internal IP) 
                   |_ Container (Multi)
Service (External IP for public)
   |_ Pod (From different nodes) (optional)


  • Kubernetes needs at least 3 nodes to enable the auto-update function for the application.

Scale Your Application

Scaling is accomplished by changing the number of replicas in a Deployment

Kubernetes also supports autoscaling of Pods

Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment. Services continuously monitor the running Pods using endpoints, to ensure the traffic is sent only to available Pods.

Update Your Application

Rolling updates allow Deployments’ update to take place with zero downtime by incrementally updating Pods instances with new ones.

Object Management Using kubectl

There are 3 ways to manage the object

  • Imperative commands (directly via commands)
  • Imperative Management of Kubernetes Objects Using Configuration Files (use yaml files for config)
  • Declarative Management of Kubernetes Objects Using Configuration Files (using config files checked into the repository)

Usually we should use the third one. The basic concept is to create a config file for the project with everything prepared.

Deploy Real MEAN Stack application with Kubernetes



  • COPY within a Dockerfile only works for copying files from outside the build context into the image. If we want to move files within the container, we need:
RUN cp A B