Sunday, 19 February 2017

Threading Basics

Generally speaking, a thread is a special object that is part of the general multitasking abilities of an OS; it allows a part of an application to run independently from the execution of other parts. This happens constantly, millisecond by millisecond, in our daily computing. In this post we will take a quick glance at what threading is, make some comparisons between threading models, and see how threads are managed and controlled, though not every topic will fit into one post.
As you can guess, multitasking refers to the ability of an OS to run more than one application at a time. For instance, you can surf my website while listening to music, with Photoshop running behind the browser, or split your monitor and work on several things simultaneously. As for processes: when an application is launched, memory and other resources are allocated for it, and the physical separation of that memory and those resources is called a process. One important note is that an application and a process are not the same thing. As you can see in the Windows Task Manager, the system's processes are listed under the Processes tab.
Threads
You will also notice that the Task Manager has summary information about process CPU utilisation. This is because the process also has an execution sequence that is used by the computer's processor. This execution sequence is known as a thread. A thread is defined by the registers in use on the CPU, the stack used by the thread, and a container that keeps track of the thread's current state. The container mentioned in the last sentence is known as Thread Local Storage. The concepts of registers and stacks should be familiar to anyone used to dealing with low-level issues such as memory allocation. Each process has at least one thread. The main thread is the one responsible for the process at run time; in most programming languages, for example, the primary thread is started in the static Main() method. Threads can access the data isolated within their process and run on the processor as required.
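As a minimal C# sketch (assuming a simple console application), the thread that the runtime starts in the static Main() method is the process's primary thread, and you can inspect it through Thread.CurrentThread:

```csharp
using System;
using System.Threading;

class Program
{
    // The primary (main) thread of the process begins executing here.
    static void Main()
    {
        Thread main = Thread.CurrentThread;
        main.Name = "MainThread"; // naming is optional, but handy for debugging

        Console.WriteLine($"Primary thread name: {main.Name}");
        Console.WriteLine($"Managed thread id: {main.ManagedThreadId}");
        Console.WriteLine($"Is background thread: {main.IsBackground}"); // false for the main thread
    }
}
```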
Time Slices
The operating system grants each application a period to execute before interrupting that application and allowing another one to execute. That statement is not entirely accurate, though: the processor actually grants time to the process. The period that a process can execute is known as a time slice or a quantum. The length of this time slice is unknown to the programmer and unpredictable to anything besides the operating system; each operating system and each processor may allocate a different amount of time.

You may spawn threads in order to do background work, such as accessing a network or querying a database; if you don't, your application (its main thread) will freeze while it waits. For this reason they are commonly known as worker threads. These threads share the process's memory space, which is isolated from all the other processes on the system. The concept of spawning new threads within the same process is known as free threading. There is also the apartment threading model, which was used in VB 6.0. In that model each process was granted its own copy of the global data needed to execute, and each thread was spawned within its own process, so threads could not share data in the process's memory. This is vastly different from free threading. With each thread taking its turn to execute, we might be reminded of that frustrating wait in line at the bank teller. Remember, however, that these threads are interrupted after a brief period; at that point another thread, perhaps one in the same process, perhaps one in another process, is granted execution. You can find some thread-related columns in the Task Manager, as shown in the figure below.
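Here is a minimal sketch of spawning a worker thread in C#; the Thread.Sleep call simply stands in for the kind of slow network or database work mentioned above:

```csharp
using System;
using System.Threading;

class WorkerDemo
{
    static void Main()
    {
        // Spawn a worker thread inside the same process (free threading);
        // it shares the process's memory space with the main thread.
        Thread worker = new Thread(() =>
        {
            Console.WriteLine("Worker: simulating a slow query...");
            Thread.Sleep(2000); // stand-in for network or database work
            Console.WriteLine("Worker: done.");
        });

        worker.Start();

        // The main thread stays responsive while the worker runs.
        Console.WriteLine("Main: still free to do other work.");
        worker.Join(); // wait for the worker before exiting
    }
}
```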

Interrupts and Thread Local Storage 
When one thread runs out of time in its allocated time slice, it doesn't just stop and wait for its turn again. Each processor can only handle one task at a time, so the current thread has to get out of the way. Before it steps out of line, however, it has to store the state information that will allow its execution to resume from the point where it left off. This is a function of Thread Local Storage (TLS). The TLS for a thread contains the registers, stack pointers, scheduling information, address spaces in memory, and information about other resources in use. One of the registers stored in the TLS is a program counter that tells the thread which instruction to execute next. Windows knows when it needs to make a decision about thread scheduling by using interrupts. An interrupt is a mechanism that causes the normally sequential execution of CPU instructions to branch elsewhere in computer memory without the knowledge of the executing program. Windows determines how long a thread has to execute and places an instruction in the current thread's execution sequence; this period can differ from system to system and even from thread to thread on the same system. Since this interrupt is placed in the instruction set, it is known as a software interrupt. This should not be confused with hardware interrupts, which occur outside the specific instructions being executed. Once the interrupt is placed, Windows allows the thread to execute. When the thread comes to the interrupt, Windows uses a special function known as an interrupt handler to store the thread's state in the TLS. The current program counter for that thread, which was stored before the interrupt was received, is then stored in that TLS. As you may remember, this program counter is simply the address of the currently executing instruction.
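The OS-level TLS described above is managed for you and is not something you manipulate directly from C#. The runtime does, however, expose per-thread storage for your own data through ThreadLocal&lt;T&gt;, which at least illustrates the idea that each thread keeps its own isolated copy of certain state; a small sketch, purely for illustration:

```csharp
using System;
using System.Threading;

class TlsDemo
{
    // Each thread that touches this field gets its own independent copy.
    static readonly ThreadLocal<int> Counter = new ThreadLocal<int>(() => 0);

    static void Main()
    {
        ThreadStart work = () =>
        {
            for (int i = 0; i < 3; i++) Counter.Value++;
            Console.WriteLine($"Thread {Thread.CurrentThread.ManagedThreadId}: {Counter.Value}");
        };

        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        // Each thread prints 3; their counters never interfere with one another.
    }
}
```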
Thread Sleep and Interrupt
A thread may have yielded execution to another thread so that it can wait on some outside resource. However, the resource may not be available the next time the thread is brought back to execute; in fact, it may not be available the next 20 times the thread is executed. The programmer may wish to take this thread out of the execution queue for a longer period so that the processor doesn't waste time switching to it just to realise it has to yield execution again. When a thread voluntarily takes itself out of the execution queue for a period, it is said to sleep. When a thread is put to sleep, it is again packed up into its TLS, but this time the TLS is not placed at the end of the runnable queue; it is placed on a separate sleep queue. For threads on the sleep queue to run again, they are marked to do so with a different kind of interrupt called a clock interrupt. When a thread is put into the sleep queue, a clock interrupt is scheduled for the time when the thread should be awakened. When a clock interrupt occurs that matches that time, the thread is moved back to the runnable queue, where it will again be scheduled for execution. We will see this later in a video illustration.
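In C#, a thread puts itself to sleep with Thread.Sleep, and another thread can wake it early with Thread.Interrupt, which surfaces in the sleeping thread as a ThreadInterruptedException; a minimal sketch:

```csharp
using System;
using System.Threading;

class SleepInterruptDemo
{
    static void Main()
    {
        Thread sleeper = new Thread(() =>
        {
            try
            {
                Console.WriteLine("Sleeper: waiting on a resource, going to sleep...");
                Thread.Sleep(Timeout.Infinite); // take ourselves off the runnable queue
            }
            catch (ThreadInterruptedException)
            {
                // Interrupt wakes a sleeping/waiting thread by throwing this exception.
                Console.WriteLine("Sleeper: interrupted, back on the runnable queue.");
            }
        });

        sleeper.Start();
        Thread.Sleep(1000);  // give the sleeper time to block
        sleeper.Interrupt(); // wake it up early
        sleeper.Join();
    }
}
```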
Thread Abort
A thread can be interrupted, and it can go to sleep. However, like all other good things in life, threads must also die. A thread can be stopped explicitly by a request made during the execution of another thread; when a thread is ended in this way, it is called an abort. Threads also stop when they come to the end of their execution sequence. In either case, when a thread ends, the TLS for that thread is de-allocated. The data in the process used by that thread does not go away, however, unless the process also ends. Threads can also be suspended and resumed in some cases.
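On the classic .NET Framework (the platform the C# Threading Handbook targets), one thread can request an abort of another with Thread.Abort, which raises a ThreadAbortException in the target thread; newer .NET versions no longer support this call, so treat the sketch below as illustrative only:

```csharp
using System;
using System.Threading;

class AbortDemo
{
    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            try
            {
                while (true)
                {
                    Console.WriteLine("Worker: still running...");
                    Thread.Sleep(500);
                }
            }
            catch (ThreadAbortException)
            {
                // Raised in the worker when another thread calls Abort on it.
                Console.WriteLine("Worker: abort requested, cleaning up.");
            }
        });

        worker.Start();
        Thread.Sleep(1500);
        worker.Abort(); // request the abort from the main thread (.NET Framework only)
        worker.Join();  // once the worker finishes, its TLS is released
    }
}
```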
Thread Priorities
We have now seen how a thread can be interrupted, put to sleep, and aborted, but one thing is still missing from the explanation. The last topic we need to cover for the basic concept of threading is how threads are prioritised. Using the analogy of our own lives, we understand that some tasks take priority over others. Windows prioritises threads on a scale of 0 to 31, with larger numbers meaning higher priorities. As a programmer, you can set the priority of your threads yourself, and the process itself has a priority as well. In Windows, as long as threads of a higher priority exist, threads of a lower priority are not scheduled for execution. The processor schedules all threads at the highest priority first, and each thread at that priority level takes its turn executing in a round-robin fashion. Only after all threads at the highest priority have completed are the threads at the next highest level scheduled for execution.
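In C#, a thread's priority is set through its Priority property using the ThreadPriority enumeration; the OS then maps these relative levels onto its own 0 to 31 scale. A minimal sketch:

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        Thread low = new Thread(() => Console.WriteLine("low-priority work"));
        Thread high = new Thread(() => Console.WriteLine("high-priority work"));

        // A hint to the scheduler, not a guarantee of execution order.
        low.Priority = ThreadPriority.BelowNormal;
        high.Priority = ThreadPriority.AboveNormal;

        low.Start();
        high.Start();
        low.Join();
        high.Join();
    }
}
```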

References: C# Threading Handbook



