I have a service that generates files and spawns subprocesses to handle them.
When the number of concurrent requests gets very high, the following error is reported:
Error: channel closed
    at process.target.send (internal/child_process.js:510:16)
    at IncomingMessage.<anonymous> (/work/www/willclass/routes/tools/esources.js:60:14)
    at emitNone (events.js:72:20)
    at IncomingMessage.emit (events.js:166:7)
    at endReadableNT (_stream_readable.js:913:12)
    at nextTickCallbackWith2Args (node.js:442:9)
    at process._tickCallback (node.js:356:17)
Could any expert advise me on how to control the number of subprocesses?
The main problem is that the subprocesses were not shut down correctly when the main process exited. There are two approaches:
1. cluster
2. child_process
1. The first option: cluster
You can get the number of workers that are currently forked (and not yet killed):
However, according to the official documentation, cluster is actually implemented on top of child_process,
so let's focus on child_process.
2. The second option: child_process
The main process gets a callback event when it exits. Did you kill the forked subprocesses inside that callback?
I have tried this before, and sometimes there is no way to kill a child: a system
call can put the child into kernel (uninterruptible) state. For example, suppose the subprocess is reading a file.
If the main process exits before the read finishes and tries to kill the child, the kill can fail, because a process
in that kernel state does not act on signals, even SIGKILL, until the call returns.
From my personal point of view, I suggest tracking the number of subprocesses in your own logic: keep a counter,
and decrement it by one whenever a child is killed. Before the program exits normally, kill all remaining subprocesses
and make sure the counter has dropped to 0. For the case where the master process is killed forcibly, I think it is better,
before restarting the master, to have a script check whether any subprocesses from the previous run were left unkilled and clean them up.
Just my personal view; discussion is welcome.