A thread is a flow of execution through a process’s code. It has its own program counter, which keeps track of which instruction to execute next; its own system registers, which hold its current working variables; and its own stack, which holds its execution history. A thread shares a few pieces of information with its peer threads, such as the code segment, the data segment, and open files. When one thread alters a shared memory item in the data segment, all of its peer threads see the change. A thread is also called a lightweight process. Through parallelism, threads offer a way to enhance application performance.
No thread can exist outside of a process, and each thread belongs to exactly one process. Each thread represents a separate flow of control. Threads have been used successfully to implement network servers and web servers, and they also provide a suitable foundation for running applications in parallel on shared-memory multiprocessors.
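The sharing described above can be sketched with Python's `threading` module (the choice of Python here is an illustration, not something the text prescribes): each thread gets its own stack for local variables, while module-level data is visible to all peer threads in the process.

```python
import threading

# Shared by every thread in the process (lives in the shared data segment).
shared_results = {}

def worker(name, n):
    # 'total' is a local variable on this thread's own private stack.
    total = sum(range(n))
    # The dictionary is shared, so this update is visible to peer threads.
    shared_results[name] = total

threads = [threading.Thread(target=worker, args=(f"t{i}", 10 * (i + 1)))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for each flow of execution to finish

print(shared_results)  # → {'t0': 45, 't1': 190, 't2': 435}
```

After `join()` returns, the main thread can safely read everything the workers wrote, because all of them operate in the same address space.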
Advantages of Thread
- Threads minimize context-switching time.
- Threads provide concurrency within a process.
- Threads enable efficient communication.
- It is more economical to create and context-switch threads than processes.
- Threads allow multiprocessor architectures to be utilized on a greater scale and with greater efficiency.
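The concurrency advantage can be seen in a small timed sketch (again using Python's `threading` as an assumed stand-in): several blocking waits, which model I/O, overlap instead of running back to back.

```python
import threading
import time

def io_task():
    time.sleep(0.2)  # stands in for a blocking operation such as network I/O

start = time.perf_counter()
threads = [threading.Thread(target=io_task) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Five 0.2-second waits overlap, so the total is roughly 0.2 s, not 1.0 s.
print(f"elapsed: {elapsed:.2f}s")
```

Run serially, the five waits would take about one second; run concurrently in one process, they finish together.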
Types of Thread
Threads are implemented in the following two ways −
- User Level Threads − Threads managed by the user, without kernel support.
- Kernel Level Threads − Threads managed directly by the operating system kernel, the core of the operating system.
User threads must ultimately be mapped to kernel threads, and there are three common models for establishing this relationship.
- Many-to-One Model
- One-to-One Model
- Many-to-Many Model
As seen in the video, the many-to-one model maps multiple user threads to a single kernel thread. Thread management is done at the user level, so it is efficient.
As seen in the video, the one-to-one model maps each user thread to its own kernel thread.
As seen in the video, the many-to-many model maps many user threads to a smaller or equal number of kernel threads. The number of kernel threads may be specific to a particular application or machine.
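As a concrete illustration of the one-to-one model: in CPython, each `threading.Thread` is backed by its own kernel thread, and `threading.get_native_id()` (available since Python 3.8 on common platforms) reveals the ID the kernel assigned. This is a sketch of how one runtime happens to map threads, not a universal rule.

```python
import threading

worker_ids = []

def report():
    # get_native_id() returns the kernel-assigned ID of the calling thread.
    worker_ids.append(threading.get_native_id())

workers = [threading.Thread(target=report) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()

# Each worker ran on its own kernel thread, distinct from the main
# thread's kernel thread: a one-to-one mapping.
print("main:", threading.get_native_id(), "workers:", worker_ids)
```

A many-to-one runtime would instead report the same kernel ID for several user threads.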
- Resource sharing: Because threads share the memory and resources of their process, an application can perform multiple activities inside the same address space.
- Utilization of Multiprocessor Architecture: Different threads can run in parallel on multiple processors, which increases processor utilization and efficiency.
- Reduced Context Switching Time: Threads minimize context-switching time because, in a thread context switch, the virtual memory space remains the same.
- Economical: Allocating memory and resources during process creation is costly. Because threads share the resources of their process, it is more economical to create and context-switch threads.
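Resource sharing cuts both ways: because peer threads update the same address space, concurrent writes to shared data need synchronization. A minimal sketch with Python's `threading.Lock` (the counter and thread counts here are illustrative choices):

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # The lock serializes the read-modify-write on the shared counter;
        # without it, interleaved updates could be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 40000: every update to the shared variable survives
```

The same shared address space that makes communication efficient is what makes the lock necessary.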