Hello World!

“Perfection is unattainable, but if we chase perfection we can catch excellence.” ― Vince Lombardi

View My GitHub Profile

React-Redux

What is React?

React, also known as ReactJS or React.js, is a JavaScript library for building user interfaces. As a frontend library, React dynamically updates components on its virtual Document Object Model (DOM) and reflects these changes on the actual browser DOM. This makes React a popular choice for building single-page applications, where the website interacts with the user by dynamically updating the current page rather than loading an entirely new one.

A common full-stack React setup uses React as the frontend library, Node.js as the backend runtime, and MongoDB as the database. More information can be found in this post.

Why React-Redux?

Personally, I find that React-Redux provides a very efficient model that clearly separates the responsibilities of React and Redux. React, as the frontend library, builds the interface, while Redux is a library for state management (here I equate passing data with state management, because state management is normally about updating the data in a state, and passing data usually involves changes to a state). What is left for us to do is simply to plug the data into the correct component so that it is displayed in the browser. In this way, the interface is separated from the state-management logic, which matters as your application grows more complicated.

There are other reasons that React-Redux is preferred by many. One of them is that Redux helps when multiple components need to share the same data but are not closely related to one another. Redux provides a central store that can supply data to any part of the application. Personally, I haven't built an application with React alone, so I cannot judge how often this situation arises, but it makes sense to me: the store is indeed powerful in Redux, acting as the data/state centre where Redux updates the application's state, and each update is reflected in the data displayed by the React components.

Please note that for a simple React app, you might not need Redux.
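Redux itself is a JavaScript library, but the core idea of a central store is small enough to sketch in Python (a toy illustration only, not the real Redux API): the store holds the single state, a reducer computes the next state from an action, and subscribers are notified of every change.

```python
def counter_reducer(state, action):
    # A reducer computes the next state from the current state and an action.
    if action["type"] == "INCREMENT":
        return state + 1
    if action["type"] == "DECREMENT":
        return state - 1
    return state


class Store:
    """A minimal Redux-style store: one state, updated only via dispatch."""

    def __init__(self, reducer, initial_state):
        self._reducer = reducer
        self._state = initial_state
        self._subscribers = []

    def get_state(self):
        return self._state

    def dispatch(self, action):
        # Every state change goes through dispatch, so every subscribed
        # "component" sees the same single source of truth.
        self._state = self._reducer(self._state, action)
        for callback in self._subscribers:
            callback(self._state)

    def subscribe(self, callback):
        self._subscribers.append(callback)


store = Store(counter_reducer, 0)
store.subscribe(lambda s: print("state is now", s))
store.dispatch({"type": "INCREMENT"})
store.dispatch({"type": "INCREMENT"})
print(store.get_state())  # → 2
```

In real Redux the components re-render when notified; here the subscriber just prints, but the flow of data through the store is the same.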

How does React-Redux work?

I have been building a simple React-Redux web interface to demonstrate how each part of Redux works and how data can be passed from Redux to React components, and vice versa.

Threads, Asynchronous Programming, Parallel Programming

This post will 1. briefly explain the relationship among threads, asynchronous programming, and parallel programming; and 2. show when and how to use threads, with a simple example from personal experience. This is fairly basic material, but it is a good place to get the basics down before diving into a more advanced level.

So, first question first: let's begin by explaining what concurrency and parallelism are, and the difference between them. In computing, we say a system is concurrent when it can be represented as separate components that are able to communicate with each other. Parallelism comes in when those individual components do not communicate with each other but are managed by a central control.

Asynchronous programming, which is often used with callbacks, is one means of concurrent programming: in asynchronous programming, one component does not depend on another, but components can notify each other through callbacks. Threads are another way to achieve asynchronous programming, with each thread seen as a task, and tasks can be implemented to communicate among themselves. (reference)
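A minimal Python sketch of this callback style (the task names and delays are made up for illustration): two tasks run concurrently, neither blocks the other, and each reports back through a callback when it finishes.

```python
import asyncio


async def worker(name, delay, on_done):
    # Each task runs independently of the others; when it finishes,
    # it notifies the caller through the on_done callback.
    await asyncio.sleep(delay)
    on_done(name)


async def main():
    results = []
    # Both workers run concurrently; the one with the shorter delay
    # calls back first, even though it was scheduled second.
    await asyncio.gather(
        worker("task-1", 0.2, results.append),
        worker("task-2", 0.1, results.append),
    )
    return results


print(asyncio.run(main()))  # → ['task-2', 'task-1']
```

The completion order depends only on each task's delay, not on the order they were started, which is exactly the non-blocking behaviour described above.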

When the main process divides a task into multiple pieces and executes them on different CPU cores, we are doing parallel programming: when all pieces of the task are done, the main process can, for example, gather the data returned from each piece and do whatever else it needs to do. One example of parallel programming is Java's parallel programming support.
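The Java example has a close Python counterpart; the sketch below (the function and inputs are made up for illustration) splits the work across a pool of worker processes and collects the results once every piece is done.

```python
from multiprocessing import Pool


def square(n):
    # The piece of work that each worker process executes.
    return n * n


if __name__ == "__main__":
    # The main process farms the inputs out to worker processes
    # (one per CPU core by default) and gathers the results.
    with Pool() as pool:
        results = pool.map(square, range(8))
    print(results)  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

Unlike threads in CPython, separate processes run truly in parallel on multiple cores, which is why this is parallelism rather than mere concurrency.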

As a side note, there are in fact 3 major ways to achieve concurrent programming: polling (problem: heavy processing), threads (problem: deadlock), and asynchronous programming.

Now let's move on to the second point of this post: when and how to use threads. In my opinion, coming from a synchronous programming background, threads do not make obvious sense at first. Basically, what a thread does is complete a task, which could be implemented as a plain function/method instead. The catch is that when we run a function/method in synchronous programming, we cannot run another function at the same time. In asynchronous programming, we want to do task 1 while carrying on task 2 as well.

For example, I have a robot that listens for commands from my mobile app. When the app sends the robot the command to clean the floor, the robot should still be able to listen to the app for upcoming commands. In this case, in the robot's main process, cleaning the floor and listening for upcoming commands should be implemented as two different threads. This cannot be done with synchronous programming, because there the robot would have to finish cleaning the floor before any other function/method could be called.

In general, it is always good to think about the behaviour of the system we are going to build; then we can decide whether or not to use asynchronous programming. Below is a simple example of asynchronous programming:

main.py:
robot.start("some_task", 20 * 60, set_life_time())
execute_something_else()
robot.stop()

robot1.py:
def start(self, identifier, interval, life_time):
        # pass, because this robot is not implemented to handle tasks; we do so for the sake of abstraction
        pass
 
robot2.py:
def start(self, identifier, interval, life_time):
        super().start(identifier, interval, life_time)
        # some_task and another_task are callables defined elsewhere; each Task thread calls them repeatedly
        self.environmental_task = Task(some_task, interval, life_time)
        self.command_task = Task(another_task, interval, life_time)
        self.environmental_task.start()
        self.command_task.start()
        
# thread implementation
thread.py:
from datetime import datetime
from threading import Thread, Event


class Task(Thread):
    def __init__(self, task, interval, life_time):
        # task is a callable to run repeatedly; interval is the pause between
        # runs and life_time the total running time, both in seconds
        Thread.__init__(self)
        self.task = task
        self.interval = interval
        self.life_time = life_time
        self._is_stopped = Event()

    def run(self):
        start_time = datetime.now()

        # an Event object is always truthy, so the flag must be read with is_set()
        while (datetime.now() - start_time).total_seconds() < self.life_time and not self._is_stopped.is_set():
            self.task()
            # wait() doubles as a sleep that returns early once stop() sets the event
            self._is_stopped.wait(self.interval)

    def stop(self):
        # set the event instead of rebinding the attribute, so run() sees the change
        self._is_stopped.set()

PyTorch Installation Guide

This post shares my experience installing PyTorch on macOS 10.12, with the objective of running the example code on the CPU on a Mac.

The main issue that blocks many Mac users from installing PyTorch successfully is that the macOS binaries do not support CUDA for macOS versions lower than 10.13 (supported from CUDA 9.2). CUDA is installed with PyTorch to enable tensors to run on the GPU, instead of the CPU only, for parallel computing. To get around this issue, there are two methods: the first is to install PyTorch from source; the second is to use the CPU only, without the GPU. This post is dedicated to the second method.

First, please make sure that you have Xcode, and either pip3/pip2 or Anaconda, installed. The latter are package managers that manage the packages available in a given virtual environment. You may choose your favourite package manager to install PyTorch, but keep in mind that you need to be consistent, because packages installed by pip are not available in a Conda environment.

Then we are all set to install CUDA 9.0, which matches our macOS version 10.12.

Lastly, we install the PyTorch package running on the CPU only:

conda install pytorch torchvision -c soumith 

This command (reference) is very important, because it makes PyTorch run on the CPU only, rather than look for a GPU, which macOS 10.12 does not support.

Well, we are one more step away from running our example code, which is to disable all GPU-related code and change it to CPU-enabled code. Here is the modified code that you can refer to, and you can run the code with the parameters specified by this readme.
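The usual pattern for such changes looks like the sketch below (assuming PyTorch 0.4 or later, where torch.device and torch.cuda.is_available are available): select the device once, then create every tensor on it, so the same script runs on machines with or without a GPU.

```python
import torch

# Fall back to the CPU whenever CUDA is unavailable, as on macOS 10.12.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3, device=device)
y = x @ x  # the matrix multiply runs on the CPU when no GPU is found
print(y.shape)  # torch.Size([3, 3])
```

Guarding every CUDA call this way is what "disabling the GPU-related code" amounts to in practice.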

The following is relevant material for running with PyTorch:

A small discussion on off-by-one vulnerability

The reason I became interested in the off-by-one vulnerability is that, from my personal point of view, it is really easy for beginner-level C programmers to neglect; and yet, amazingly, it can be exploited by carefully manipulating the input to a program, breaching the CIA model of information security. Therefore, it is worthwhile to bring up this vulnerability and discuss it in this blog.

This vulnerability can be demonstrated in the following code snippet:

#include <stdio.h>
#include <string.h>
void func(char *input) {
        char buffer[10];
        strcpy(buffer, input);
}

int main(int argc, char *argv[]) {
        func("AAAAAAAAAA"); // 10 As
        return 0;
}

If we observe the piece of code above carefully, we will notice that strcpy(buffer, input) is where the problem lies. To see it clearly, we, as programmers, have to be sure of the following 2 points:

1. strcpy copies the source string into the destination buffer up to and including the terminating null byte.
2. The string "AAAAAAAAAA" consists of 10 As plus the terminating null byte, so it occupies 11 bytes, while buffer is only 10 bytes long; the null byte therefore overflows the buffer by exactly one byte.

Interestingly, the symptom of this bug varies with the machine executing the code. For example, I ran this same piece of code on macOS 10.12.6 and on Ubuntu 16.04.5, and received different results. On macOS 10.12.6, running the executable produces an Abort trap: 6 exception, while on Ubuntu 16.04.5 no exception is shown when running the executable built from the same source. Evidently, the Mac is more aggressive with its overflow checking. It also tells us that it is the programmer's primary responsibility to ensure the correctness of the code in terms of syntax, logic, and also safety.

You may think that this is just a null byte overflowing the destination buffer and that it brings no negative impact to the program, as suggested by the result on Ubuntu 16.04.5, where no exception is shown. If so, we need to think twice: we realize the seriousness of this one overflowing byte once it occurs to us that the caller's saved EBP may be overwritten, since it is located just above the destination buffer, as shown below:

buffer
saved EBP
saved EIP

If working with the little-endian format, we will overwrite the least significant byte of the saved EBP with a null byte, so the saved EBP is corrupted. After the leave and ret instructions of main's epilogue, the ESP register is moved to point at the corrupted EBP value, and the value ESP points to is popped into EBP. Afterwards, ESP points to the address corrupted EBP + 4, and when the ret instruction executes, it sets the EIP register to the value at the memory address ESP points to (i.e. the word at corrupted EBP + 4 becomes the new EIP). Thus, to exploit this vulnerability by loading shellcode into the buffer, we should place the address of the shellcode at the address corrupted EBP + 4.
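The effect of that single null byte on a little-endian word can be sketched in Python (the saved EBP value 0xffffd5a8 is made up purely for illustration):

```python
import struct

# Hypothetical saved EBP value, for illustration only.
saved_ebp = 0xFFFFD5A8

# In little-endian memory the lowest-addressed byte of a word is its least
# significant byte, and the lowest-addressed byte of the saved EBP slot is
# exactly where strcpy's terminating null byte lands.
raw = bytearray(struct.pack("<I", saved_ebp))
raw[0] = 0x00  # the off-by-one null-byte overwrite
corrupted = struct.unpack("<I", bytes(raw))[0]

print(hex(corrupted))  # → 0xffffd500
```

The corrupted frame pointer now points up to 255 bytes lower on the stack, which is what lets an attacker who controls that region redirect execution.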

Finally, from the above discussion, we can learn that even though just one byte overflows the destination buffer, it can still be exploited to change the execution flow of the program, which may lead to serious security problems, e.g. privilege escalation. It also reminds us to read the descriptions of C functions carefully and to use them with the right understanding.

My friend

There is a person who told me "if you cannot explain it simply, you don’t understand it well enough". Even though I know it’s from Einstein, I am still deeply touched.

Let’s call this person my friend.

I appreciate how much I have learned from my friend. I have to say that my friend is a brilliant person of a kind I could rarely find in my social circle. I think one of the things that contributes to my friend’s smartness is the ability to keep learning after graduating from our university. How do I know that? One simple piece of evidence is a list of books my friend shared with me, which records the various books my friend has read and the date each one was finished. The books on the list are sorted by category: finance, philosophy, etc. I was amazed when I first looked at that list. How could a person record every book they have read? It takes not just the drive to read and to learn, but also consistency, organisation, and persistence. These abilities show in the big things, but also, and often, in the small details too.

There is a discipline in my friend that can keep one going over the years. We have known each other for almost 3 years, not long, but each year when we chat I know that my friend keeps learning and accumulating more achievements. Of course, adversity hits sometimes, but my friend keeps going. I often wonder, as probably most people do, whether people with strong self-discipline ever secretly play, or how they can keep studying for such a long time. Well, I guess the answer is that they do rest, or “play”, in a different way: they read different books for a change of mind. Living in this digitalised world, we are often drowned in the net of social media. Information comes into our eyes fast and leaves quickly. We are used to being fed with online, fast-food-style information. Therefore the imagining and thinking that reading a book requires takes effort. Gradually, it becomes harder for us to pick up a book than to hold a mobile phone and browse. By keeping the habit of reading, my friend is able to keep learning different things all the time.

Have I mentioned that my friend is very organised as well? It is not just the book list. One of the things that surprised me most is how the knowledge my friend is going to learn, or has learned, is organised into categories. A vision is thereby formed. Everything my friend has learned falls into a particular category, so clearly. Organisation brings clarity, always.

"Perfection is unattainable, but if we chase perfection we can catch excellence," Vince said. I understand that my friend is intelligent, cultivated, and gentle with people, but I also know that my friend is not perfect. I guess what touches me the most is that we share a mindset of chasing. It is a way of suffering, but "pain is inevitable, suffering is optional".

If you are reading this blog, my friend, thank you for being who you are and for inspiring me.

Cheers.