
Operating System: Process and Thread Review

Review and Summary
Process and Thread

A process is the basic unit the operating system schedules; a running program can be thought of as a process.
A thread is an execution path inside a process and is the basic unit of scheduling within that process. Note that a thread is not a "sub-process": all threads of one process share the same address space and resources.
  Understanding process and thread from the scheduler's point of view
  We know the operating system schedules in units of processes: when a program is a single-threaded process, its one thread receives the entire CPU time slice granted to the process. The kernel maintains a process table recording the basic information of each process; depending on the scheduling policy, this information may or may not be consulted.
  When a process owns multiple threads, how are the threads scheduled?

  •  When the threads are kernel-level threads.
  •  When the threads are user-level threads.
  • (Hybrid implementations are not discussed here.)

  The two cases are scheduled differently. With kernel-level threads — in this case a thread can be called a lightweight process — the kernel maintains a thread table analogous to the process table, and the operating system schedules threads directly, just as it schedules processes. With user-level threads, the operating system cannot perceive that multiple threads exist: it still schedules processes, and it does not care how many threads a process contains. Once a process receives a CPU time slice, it schedules its own threads internally.
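To make the user-level case concrete, here is a minimal sketch (not from the original post, and deliberately simplified) of what "the process schedules its own threads" means: the process keeps a run queue of thread records and round-robins over them, while the kernel only ever sees a single schedulable entity.

```go
package main

import "fmt"

// userThread is a hypothetical record a user-level thread library
// might keep per thread; `step` stands in for saved execution state.
type userThread struct {
	id   int
	step int
}

func main() {
	// The process's private run queue; the kernel knows nothing of it.
	runQueue := []*userThread{{id: 1}, {id: 2}, {id: 3}}

	// Round-robin scheduling: each pass gives every user thread
	// one "time slice" out of the slice the process itself received.
	for pass := 0; pass < 2; pass++ {
		for _, t := range runQueue {
			t.step++
			fmt.Printf("thread %d ran step %d\n", t.id, t.step)
		}
	}
}
```

A real library would additionally save and restore register state on each switch; the point here is only that the switching logic lives entirely in user space, which is why it is cheap.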
  With the basic working models of both clear, the key differences between the two kinds of threads should also be clear: who does the scheduling? What are the advantages and disadvantages of each? Which kind of thread does Java use? How do Go's goroutines map onto the operating system's scheduling units?

  • Scheduler. Kernel-level threads: the kernel. User-level threads: the process itself.
  • Advantage of kernel-level threads: when one thread blocks in a syscall (a thread cannot both run user code and sit in a syscall at the same time), the operating system can switch to the next runnable thread in the same process or in another process rather than blocking; with user-level threads, one thread blocking in a syscall blocks every thread in the process. The advantage of user-level threads is that switching between them is very cheap, because unlike kernel-level threads no trap into the kernel is required.
  • Java: it depends on the JVM, but the usual model is 1:1, i.e. each JVM thread corresponds to one operating-system thread. From this it follows that Java threads cannot grow without bound (the kernel's thread table has a finite size), and that creating a thread is expensive.
  • Goroutines: Go exposes the parameter GOMAXPROCS ("The GOMAXPROCS variable limits the number of operating system threads that can execute user-level Go code simultaneously"). Goroutines are multiplexed onto OS threads in an M:N model: goroutines are managed by a structure called a context, each context is attached to one thread, and the context pops runnable goroutines from its queue and runs them in turn. When a goroutine enters a blocking syscall, its context is left holding only that goroutine, and the scheduler ensures another context (or thread) is available to run the remaining goroutines. See: go-scheduler
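The M:N multiplexing described above can be observed directly: even with GOMAXPROCS pinned to a single OS thread, the runtime still runs many goroutines by interleaving them on that thread. A minimal demonstration:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Limit user-level Go code to one OS thread; the runtime
	// scheduler still multiplexes all goroutines onto it.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	results := make(chan int, 100)

	// 100 goroutines, far more than the single thread allowed above.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n
		}(i)
	}
	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // prints 328350, the sum of squares 0..99
}
```

Contrast this with the 1:1 Java model mentioned above, where 100 concurrent tasks would cost 100 kernel threads.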


