Webpack optimization


Original link: Webpack optimization. The original page's ad boxes made for a poor reading experience, so the text has been reorganized here for easier reference.

Optimize development experience

  1. Optimize build speed. When a project is large, builds can take a very long time, and the time spent waiting for them adds up.

    • Narrow file search
    • Use DllPlugin
    • Use HappyPack
    • Use ParallelUglifyPlugin
  2. Optimize the development experience. Automate the repetitive work so that we can focus on the problem itself.

    • Use automatic refresh
    • Enable Hot Module Replacement.

Optimize output quality

The purpose of optimizing output quality is to present users with web pages that offer a better experience, such as reducing first-screen load time and improving runtime smoothness. This is important because, in today's increasingly competitive Internet industry, it can determine the life or death of your product.

The essence of optimizing output quality is optimizing the build output that will be published online, which breaks down into the following points:

  1. Reduce the load time perceived by users, i.e. the first-screen load time.

    • Distinguish between environments
    • Compress code
    • CDN acceleration
    • Use Tree Shaking
    • Extract common code
    • Load on demand
  2. Improve smoothness, i.e. improve code performance.

    • Use Prepack
    • Enable Scope Hoisting

Narrow file search

After Webpack starts, it parses the import statements in files beginning with the configured Entry, and then resolves them recursively. Webpack does two things when it encounters an import statement:

  1. Find the file to import according to the import statement. For example, the file corresponding to `require('react')` is `./node_modules/react/react.js`, and the file corresponding to `require('./util')` is `./util.js`.
  2. Process the file with the Loaders in the configuration, according to the suffix of the file found. For example, JavaScript files written with ES6 need to be processed by `babel-loader`.


Because Loader transforms of files are time-consuming, you want as few files as possible to be processed by Loaders.

As described in the Module section, when using a Loader you can use the `test`, `include`, and `exclude` configuration items to match the files the Loader's rule applies to. To have as few files as possible processed by the Loader, use `include` to match only the files that actually need processing.

Taking a project that uses ES6 as an example, the `babel-loader` configuration can look like this:

const path = require('path');

module.exports = {
  module: {
    rules: [
      {
        // If the project source only contains .js files, don't write /\.jsx?$/;
        // the simpler regular expression performs better
        test: /\.js$/,
        // babel-loader can cache transform results; enable this with the cacheDirectory option
        use: ['babel-loader?cacheDirectory'],
        // Only apply babel-loader to files under the src directory in the project root
        include: path.resolve(__dirname, 'src'),
      },
    ],
  },
};

You can also adjust the project's directory structure appropriately so that `include` can narrow the matched range.


The Resolve section introduced `resolve.modules`, which configures which directories Webpack searches when looking for third-party modules.

The default value of `resolve.modules` is `['node_modules']`, which means: first look for the wanted module in the current directory's `./node_modules`; if it isn't there, go up one level to `../node_modules`; if it still isn't found, continue to `../../node_modules`, and so on. This is very similar to Node.js's module resolution mechanism.

When all installed third-party modules are placed in the `./node_modules` directory under the project root, there is no need to search level by level in the default way: you can point Webpack at the absolute path where third-party modules are stored, to reduce searching. The configuration is as follows:

const path = require('path');

module.exports = {
  resolve: {
    // Use an absolute path to indicate where third-party modules live, to cut search steps
    // __dirname is the current working directory, i.e. the project root
    modules: [path.resolve(__dirname, 'node_modules')],
  },
};


The Resolve section introduced `resolve.mainFields`, which configures which entry file a third-party module uses.

Every installed third-party module has a `package.json` file describing the module's properties, and some of its fields describe where the entry file is. `resolve.mainFields` configures which of these fields is used as the entry-file description.

There can be multiple fields describing the entry file because some modules can be used in several environments at once, and different runtimes require different code. Take `isomorphic-fetch` as an example: it is an implementation of the fetch API that works in both the browser and Node.js. Its `package.json` has two entry-file description fields:

  "browser": "fetch-npm-browserify.js",
  "main": "fetch-npm-node.js"

`isomorphic-fetch` uses different code in different runtimes because the fetch API has to be implemented differently: in the browser it is implemented with the native `fetch` or `XMLHttpRequest`, while in Node.js it is implemented with the `http` module.

The default value of `resolve.mainFields` depends on the current `target` configuration, with the following correspondence:

  • When `target` is `web` or `webworker`, the value is `["browser", "module", "main"]`
  • For other values of `target`, the value is `["module", "main"]`

Taking `target` equal to `web` as an example, Webpack first uses the `browser` field to find the module's entry file; if that field doesn't exist, it uses the `module` field, and so on.
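That priority order can be sketched as a tiny helper (`pickEntry` is a made-up name for illustration, not a Webpack API):

```javascript
// Toy sketch of mainFields resolution: return the first configured
// field that the module's package.json actually defines.
function pickEntry(pkg, mainFields) {
  for (const field of mainFields) {
    if (pkg[field]) return pkg[field];
  }
  // Fall back to the conventional default entry
  return 'index.js';
}

// The two entry-file description fields of isomorphic-fetch, as shown above
const pkg = {
  browser: 'fetch-npm-browserify.js',
  main: 'fetch-npm-node.js',
};
```

With `['browser', 'module', 'main']` the browser build is picked; with `['main']` the Node.js build is picked.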

To reduce search steps, specify as few entry-file description fields as possible. Since most third-party modules use the `main` field to describe the entry file's location, you can configure Webpack as follows:

module.exports = {
  resolve: {
    // Use only the main field as the entry-file description field, to cut search steps
    mainFields: ['main'],
  },
};

When using this optimization, you must account for the entry-file description fields of every third-party module you depend on at runtime; if even one module is wrong, the built code may fail to run.


The `resolve.alias` configuration maps an original import path to a new import path through aliases.

In real-world projects we often depend on some huge third-party modules. Take the React library as an example: the directory structure of the installed React library under `node_modules` is as follows:

├── dist
│   ├── react.js
│   └── react.min.js
├── lib
│   ... dozens more files omitted
│   ├── LinkedStateMixin.js
│   ├── createClass.js
│   └── React.js
├── package.json
└── react.js

You can see that the released React library contains two sets of code:

  • One is modular code following the CommonJS spec. These files all live in the `lib` directory, and the `react.js` file specified as the entry in `package.json` is the module's entrance.
  • The other packs all React-related code into a single file that can execute directly without modularization. Of these, `dist/react.js` is for the development environment and contains checks and warnings, while `dist/react.min.js` is for production and is minified.

By default, Webpack starts from the entry file `./node_modules/react/react.js` and recursively parses and processes dozens of dependent files, which is a time-consuming operation. By configuring `resolve.alias`, you can make Webpack use the single, complete `react.min.js` file directly when handling the React library, skipping the time-consuming recursive parsing.

The relevant Webpack configurations are as follows:

const path = require('path');

module.exports = {
  resolve: {
    // Use alias so imports of react resolve directly to the single, complete react.min.js file,
    // skipping the time-consuming recursive resolution
    alias: {
      'react': path.resolve(__dirname, './node_modules/react/dist/react.min.js'),
    },
  },
};

Besides React, most libraries also include a packaged, complete file when published to the npm registry, and you can configure an `alias` for them in the same way.

However, for some libraries this optimization interferes with the Tree-Shaking optimization discussed later, which eliminates dead code: the packaged complete file may contain code your project never uses. This method is therefore generally used for strongly cohesive libraries, where the complete file is a whole and every line is indispensable. For utility-style libraries such as `lodash`, where your project may use only a few of the functions, you should not use this method, because the output would then contain a lot of code that is never executed.


When an import statement carries no file suffix, Webpack automatically appends suffixes and tries to check whether the file exists. `resolve.extensions` configures the suffix list used in these attempts. The default is:

extensions: ['.js', '.json']

That is, when it encounters an import statement such as `require('./data')`, Webpack first looks for a `./data.js` file; if that file doesn't exist, it looks for `./data.json`; if that still can't be found, it reports an error.
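The try order can be sketched as follows (a toy model, not Webpack's real resolver; `exists` stands in for a filesystem check):

```javascript
// Toy sketch of suffix resolution: try each suffix in the configured
// order and return the first candidate path that exists.
function resolveWithExtensions(request, extensions, exists) {
  for (const ext of extensions) {
    const candidate = request + ext;
    if (exists(candidate)) return candidate;
  }
  throw new Error(`Cannot resolve ${request}`);
}
```

Every suffix tried before the correct one is a wasted filesystem check, which is why a short list with the most frequent suffix first pays off.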

If the list is long, or the correct suffix comes late in it, more attempts are made, so the `resolve.extensions` configuration also affects build performance. When configuring `resolve.extensions`, observe the following points to optimize build performance as much as possible:

  • Keep the suffix try-list as small as possible; don't put suffixes that cannot occur in the project into the list.
  • Put the most frequent file suffix first, so the search process can exit as early as possible.
  • When writing import statements in source code, include the suffix wherever possible to avoid the search process entirely. For example, when you are sure, write `require('./data')` as `require('./data.json')`.

The relevant Webpack configurations are as follows:

module.exports = {
  resolve: {
    // Keep the list of suffixes to try as short as possible
    // (note the leading dot: '.js', not 'js')
    extensions: ['.js'],
  },
};


The `module.noParse` configuration lets Webpack skip recursive parsing of files that don't use modularization, which improves build performance. Some libraries, such as jQuery and ChartJS, are large and don't follow a modular standard, so having Webpack parse these files is time-consuming and pointless.

The `resolve.alias` optimization above points imports at the single, complete `react.min.js` file, which is not modularized, so let's configure `module.noParse` to skip recursive parsing of `react.min.js`. The relevant Webpack configuration is as follows:

module.exports = {
  module: {
    // The single, complete react.min.js file is not modularized,
    // so skip recursive parsing of it
    noParse: [/react\.min\.js$/],
  },
};

Note that ignored files must not contain modularization statements such as `import`, `require`, or `define`; otherwise the built code will contain modularization statements that cannot execute in the browser.

That covers all the build-performance optimizations related to narrowing the file search scope. Adapt them to your own project's needs and your build speed will certainly improve.

Use DllPlugin

To adopt the idea of dynamic link libraries in a Web project's build, the following needs to happen:

  • The basic modules the page depends on are split out and packaged into separate dynamic link libraries. One dynamic link library can contain multiple modules.
  • When a module to be imported exists in a dynamic link library, the module is not packaged again; it is fetched from the dynamic link library instead.
  • All dynamic link libraries the page depends on need to be loaded by the page.

Why does adopting dynamic link libraries greatly improve build speed for Web projects? The reason is that a dynamic link library containing many reusable modules only needs to be compiled once; in subsequent builds, the modules it contains are not recompiled, and the code in the dynamic link library is used directly. Because most dynamic link libraries contain commonly used third-party modules such as `react` and `react-dom`, the dynamic link library does not need to be recompiled as long as those modules' versions aren't upgraded.

Integrating with Webpack

Webpack has built-in support for dynamic link libraries, accessed through two built-in plugins:

  • DllPlugin: packages out individual dynamic link library files.
  • DllReferencePlugin: used in the main configuration file to reference the dynamic link library files packaged by DllPlugin.

Let's take a basic React project as an example of integrating DllPlugin. Before starting, look at the directory structure of the final build:

├── main.js
├── polyfill.dll.js
├── polyfill.manifest.json
├── react.dll.js
└── react.manifest.json

It contains two dynamic link library files, namely:

  • `polyfill.dll.js` contains all the polyfills the project depends on, such as the Promise and fetch APIs.
  • `react.dll.js` contains React's basic runtime, that is, the `react` and `react-dom` modules.

Taking `react.dll.js` as an example, the file's contents look roughly like this:

var _dll_react = (function(modules) {
  // ... webpackBootstrap function code omitted here
}([
  function(module, exports, __webpack_require__) {
    // code of the module whose ID is 0
  },
  function(module, exports, __webpack_require__) {
    // code of the module whose ID is 1
  },
  // ... code of the remaining modules omitted here
]));

As you can see, a dynamic link library file contains the code of many modules. These modules are stored in an array, with the array index as the module ID. The file also exposes itself globally through the `_dll_react` variable, i.e. you can access the modules it contains via `window._dll_react`.

The `polyfill.manifest.json` and `react.manifest.json` files are also generated by DllPlugin, and describe which modules the corresponding dynamic link library file contains. Taking `react.manifest.json` as an example, its contents look roughly like this:

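A minimal, hand-written sketch of the shape of `react.manifest.json` (the module paths and IDs are illustrative and will differ per project):

```json
{
  "name": "_dll_react",
  "content": {
    "./node_modules/react/lib/React.js": {
      "id": 0,
      "meta": {}
    },
    "./node_modules/react-dom/lib/ReactDOM.js": {
      "id": 1,
      "meta": {}
    }
  }
}
```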

As you can see, the `manifest.json` file clearly describes which modules the corresponding `dll.js` file contains, along with the path and ID of each module.

The `main.js` file is the compiled execution entry. When it encounters a dependency that lives in a `dll.js` file, it fetches the module packaged in that `dll.js` file through the global variable the `dll.js` file exposes. The `index.html` file therefore needs to include the two `dll.js` files; its contents are as follows:

<html>
<head><meta charset="UTF-8"></head>
<body>
<div id="app"></div>
<!-- The two dll.js files must be loaded before main.js -->
<script src="./dist/polyfill.dll.js"></script>
<script src="./dist/react.dll.js"></script>
<script src="./dist/main.js"></script>
</body>
</html>

That is all the compiled code after integrating DllPlugin. Next, let's see how to implement it.

Build a dynamic link library file

The build outputs the following four files:

├── polyfill.dll.js
├── polyfill.manifest.json
├── react.dll.js
└── react.manifest.json

And the following file:

├── main.js

They are produced by two different builds.

The files related to the dynamic link libraries need to be output by an independent build, for the main build to use. Create a new Webpack configuration file, `webpack_dll.config.js`, specifically to build them. Its contents are as follows:

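A sketch of what `webpack_dll.config.js` can look like, assuming the webpack 3-era APIs the original article targets (the entry module lists are illustrative):

```javascript
const path = require('path');
const DllPlugin = require('webpack/lib/DllPlugin');

module.exports = {
  // Which modules go into which dynamic link library
  entry: {
    // react.dll.js will contain React's basic runtime
    react: ['react', 'react-dom'],
    // polyfill.dll.js will contain the polyfills the project depends on
    polyfill: ['core-js/fn/promise', 'whatwg-fetch'],
  },
  output: {
    // Name of each output dynamic link library file, e.g. react.dll.js
    filename: '[name].dll.js',
    path: path.resolve(__dirname, 'dist'),
    // The global variable each dll.js exposes itself through, e.g. _dll_react;
    // must stay consistent with DllPlugin's name parameter below
    library: '_dll_[name]',
  },
  plugins: [
    new DllPlugin({
      // Global variable name of the dynamic link library;
      // keep it identical to output.library
      name: '_dll_[name]',
      // Where to output the manifest.json describing the library
      path: path.join(__dirname, 'dist', '[name].manifest.json'),
    }),
  ],
};
```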

Using Dynamic Link Library Files

The built dynamic link library files are meant to be used by other builds; here they are used by the build that produces the execution entry.

The main Webpack configuration file that outputs `main.js` is as follows:

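A sketch of the main configuration that outputs `main.js`, again assuming webpack 3-era APIs (entry and paths are illustrative):

```javascript
const path = require('path');
const DllReferencePlugin = require('webpack/lib/DllReferencePlugin');

module.exports = {
  entry: {
    // The execution entry of this main build
    main: './main.js',
  },
  output: {
    filename: '[name].js',
    path: path.resolve(__dirname, 'dist'),
  },
  plugins: [
    // Tell Webpack which modules the react dynamic link library contains
    new DllReferencePlugin({
      manifest: require('./dist/react.manifest.json'),
    }),
    // Tell Webpack which modules the polyfill dynamic link library contains
    new DllReferencePlugin({
      manifest: require('./dist/polyfill.manifest.json'),
    }),
  ],
};
```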

Note: in the `webpack_dll.config.js` file, DllPlugin's `name` parameter must be kept identical to `output.library`. The reason is that the `name` parameter determines the value of the `name` field in the output `manifest.json`, while the DllReferencePlugin in `webpack.config.js` reads the `name` field from `manifest.json` and uses its value as the global variable name when fetching the dynamic link library's contents from a global variable.

Execute build

After modifying the two Webpack configuration files above, the build needs to be re-run. When re-running it, note that the files related to the dynamic link libraries must be compiled first, because the DllReferencePlugin defined in the main Webpack configuration depends on them.

The process for executing the build is as follows:

  1. If the dynamic link library files have not been compiled yet, compile them first by running the command `webpack --config webpack_dll.config.js`.
  2. Only once the dynamic link libraries exist can the execution entry be compiled normally, by running the `webpack` command. At this point you will find that the build speed has improved greatly.

Use HappyPack

Because a large number of files must be parsed and processed, a build is a file-I/O- and compute-intensive operation; as the number of files grows, Webpack's slow builds become a serious problem. Webpack running on Node.js uses a single-threaded model, which means the tasks Webpack handles must be done one by one, not simultaneously.

File I/O and computation are unavoidable. Could Webpack handle multiple tasks at the same time, harnessing the power of multi-core CPUs to speed up builds?

HappyPack makes Webpack do exactly this. It splits tasks across multiple subprocesses that execute concurrently, and the subprocesses send their results to the main process once processed.

Because JavaScript uses a single-threaded model, the capability of multi-core CPUs can only be exploited through multiple processes, not multiple threads.

HappyPack takes care of splitting up the tasks and managing the subprocesses for you; all you need to do is plug it in. The relevant code for integrating HappyPack is as follows:

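A sketch of such a HappyPack setup (the Loader lists are illustrative; what matters is that each `id` matches its `?id=` querystring):

```javascript
const path = require('path');
const HappyPack = require('happypack');

module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        // Hand all .js files to happypack/loader;
        // ?id=babel selects the HappyPack instance below
        use: ['happypack/loader?id=babel'],
        exclude: path.resolve(__dirname, 'node_modules'),
      },
      {
        test: /\.css$/,
        // Hand all .css files to the instance whose id is css
        use: ['happypack/loader?id=css'],
      },
    ],
  },
  plugins: [
    new HappyPack({
      // id corresponds to the ?id=babel querystring above
      id: 'babel',
      // loaders is written exactly as in a normal Loader configuration
      loaders: ['babel-loader?cacheDirectory'],
    }),
    new HappyPack({
      id: 'css',
      loaders: ['css-loader'],
    }),
  ],
};
```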

The above code has two important modifications:

  • In the Loader configuration, all files are handed to `happypack/loader` for processing, and the querystring `?id=babel` that follows tells `happypack/loader` which HappyPack instance should handle the files.
  • In the Plugin configuration, two HappyPack instances are added, telling `happypack/loader` how to handle `.js` and `.css` files respectively. The value of the `id` option corresponds to `?id=babel` in the querystring above, and the `loaders` option is written the same way as in a Loader configuration.

When instantiating the HappyPack plugin, besides the `id` and `loaders` parameters, HappyPack also supports the following parameters:

  • `threads`: how many subprocesses to spawn for handling this type of file; the default is `3`, and the value must be an integer.
  • `verbose`: whether HappyPack is allowed to output logs; the default is `true`.
  • `threadPool`: a shared process pool, i.e. multiple HappyPack instances use subprocesses from the same pool to handle tasks, to prevent excessive resource usage. The relevant code is as follows:

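A sketch of sharing one process pool across instances (the pool size of 5 is illustrative):

```javascript
const HappyPack = require('happypack');

// Construct a shared process pool containing 5 subprocesses
const happyThreadPool = HappyPack.ThreadPool({ size: 5 });

module.exports = {
  plugins: [
    new HappyPack({
      id: 'babel',
      loaders: ['babel-loader?cacheDirectory'],
      // Use the shared pool instead of spawning a separate set of subprocesses
      threadPool: happyThreadPool,
    }),
    new HappyPack({
      id: 'css',
      loaders: ['css-loader'],
      threadPool: happyThreadPool,
    }),
  ],
};
```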

After integrating HappyPack, install the new dependency for the project:

npm i -D happypack

HappyPack principle

In the whole Webpack build process, the most time-consuming part is probably the Loaders' transformation of files, because there is a huge amount of file data to transform and these operations can only be handled one at a time. The core idea of HappyPack is to split this work across multiple processes running in parallel, reducing total build time.

From the usage above, you can see that all files needing Loader processing are first handed to `happypack/loader`; once HappyPack has collected the right to process these files, it dispatches them centrally.

Each `new HappyPack()` instantiation actually tells HappyPack's core scheduler how to transform a class of files through a series of Loaders, and how to assign subprocesses to those transformations.

The core scheduler's logic runs in the main process, i.e. the process running Webpack. The core scheduler assigns tasks one by one to the currently idle subprocesses, and the subprocesses send results back to the core scheduler once done; the data exchange between them happens over the interprocess-communication API.

After receiving a processed result from a subprocess, the core scheduler notifies Webpack that the file has been processed.

Use ParallelUglifyPlugin

When building code for publishing online with Webpack, there is a code-compression step. The most common JavaScript compression tool is UglifyJS, which is also what Webpack has built in.

With UglifyJS, you will notice that builds for the development environment finish quickly, but builds for production seem to hang at some point without responding: what is happening during that hang is the code compression.

Compressing JavaScript requires parsing the code into an AST (abstract syntax tree) represented by objects, then applying various rules to analyze and transform the AST, so the process is computationally heavy and time-consuming.

So why not bring the multi-process parallelism introduced with HappyPack into code compression?

That is exactly what ParallelUglifyPlugin does. When Webpack has several JavaScript output files to compress, the built-in approach runs UglifyJS over them one by one, but ParallelUglifyPlugin spawns multiple subprocesses and distributes the compression of the files across them. Each subprocess still compresses code with UglifyJS, but they run in parallel, so ParallelUglifyPlugin finishes compressing multiple files much faster.

Using ParallelUglifyPlugin is also very simple: remove the built-in UglifyJsPlugin from the original Webpack configuration and replace it with ParallelUglifyPlugin. The relevant code is as follows:

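A sketch of such a configuration (the UglifyJS options shown are common choices, not requirements):

```javascript
const ParallelUglifyPlugin = require('webpack-parallel-uglify-plugin');

module.exports = {
  plugins: [
    new ParallelUglifyPlugin({
      // Options passed straight through to UglifyJS
      uglifyJS: {
        output: {
          // Most compact output: no whitespace or line breaks
          beautify: false,
          // Strip all comments
          comments: false,
        },
        compress: {
          // Delete all console statements
          drop_console: true,
          // Inline variables that are defined once and used once
          collapse_vars: true,
          // Extract static values that appear multiple times into variables
          reduce_vars: true,
        },
      },
    }),
  ],
};
```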

The following parameters are supported when instantiating via `new ParallelUglifyPlugin()`:

  • `test`: a regex matching the files ParallelUglifyPlugin should compress. The default is `/\.js$/`, i.e. compress all `.js` files.
  • `include`: a regex matching files that need to be compressed by ParallelUglifyPlugin. The default is `[]`.
  • `exclude`: a regex matching files that should not be compressed by ParallelUglifyPlugin. The default is `[]`.
  • `cacheDir`: caches compression results, so the next time the same input is seen the result is fetched straight from the cache. `cacheDir` configures the directory the cache lives in. Caching is off by default; set a directory path to enable it.
  • `workerCount`: how many subprocesses to spawn for concurrent compression. The default is the number of CPU cores of the current machine minus 1.
  • `sourceMap`: whether to output a Source Map, which slows compression down.
  • `uglifyJS`: configuration for compressing ES5 code; an Object passed straight through to UglifyJS.
  • `uglifyES`: configuration for compressing ES6 code; an Object passed straight through to UglifyES.

Here `test`, `include`, and `exclude` have the same meaning and usage as when configuring a Loader.

UglifyES is a variant of UglifyJS specifically for compressing ES6 code. The two come from the same project, and they cannot be used at the same time.

UglifyES is generally used to compress code targeting relatively new JavaScript runtimes. For example, code for React Native runs in the well-supported JavaScriptCore engine; to get better performance and smaller size, compressing with UglifyES gives better results.

ParallelUglifyPlugin has both UglifyJS and UglifyES built in, which means it supports parallel compression of ES6 code as well.

After integrating ParallelUglifyPlugin, the project needs the new dependency installed:

npm i -D webpack-parallel-uglify-plugin

After the installation succeeds, re-run the build and you will find it much faster. If you set `cacheDir` to enable caching, subsequent builds become faster still.

Use automatic refresh

During development, modifying source code is unavoidable. For Web development, seeing the effect of a change requires refreshing the browser so it runs the latest code again. Although this is already much more convenient than native iOS or Android development, where the project must be recompiled and relaunched, we can optimize the experience further: hand the repetitive steps to code through automation. When a change to a local source file is detected, the code can be rebuilt automatically and the browser refreshed.

Webpack has these features built in and offers several options to choose from.

File watching

File watching means automatically rebuilding new output when a change in a source file is detected.

Webpack officially provides two modules: the core `webpack` module, and the `webpack-dev-server` extension module mentioned in the DevServer section. The file watching feature is provided by the `webpack` module.

As mentioned in Other Configuration Items, the Webpack configuration items related to file watching are as follows:

module.exports = {
  // Watch mode; defaults to false, i.e. off
  watch: true,
  // Options that apply while watch mode is running;
  // only meaningful when watch mode is enabled
  watchOptions: {
    // Files or folders not to watch; supports regex matching
    // Empty by default
    ignored: /node_modules/,
    // After a change is detected, wait 300ms before acting, so that
    // rapid file updates don't trigger overly frequent recompilation
    // Default is 300ms
    aggregateTimeout: 300,
    // Whether a file has changed is determined by repeatedly asking the
    // system whether the specified files have changed
    // Default is 1000 asks per second
    poll: 1000
  }
};

There are two ways to turn on Webpack's watch mode:

  • Set `watch: true` in the configuration file `webpack.config.js`.
  • Pass the `--watch` parameter when running the Webpack command; the full command is `webpack --watch`.

Working Principle of File Monitoring

Webpack detects file changes by fetching a file's last edit time at a fixed interval and saving the latest value each time. If the value just fetched differs from the last saved one, the file is considered changed. The `watchOptions.poll` configuration item controls the check interval, expressed as the number of checks per second.

When a file is found to have changed, the listener is not told immediately; the change is cached first, changes are collected for a while, and then the listener is told once, in one batch. The `watchOptions.aggregateTimeout` configuration item sets this wait time. The reason for doing this is that while editing code we may type at high frequency, causing files to change at high frequency; re-running the build on every change would make builds stall.
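The batching behaviour of `aggregateTimeout` can be sketched as a debounce-style collector (a simplification of what Webpack actually does):

```javascript
// Collect change events and notify the listener once, after no further
// change has arrived for timeoutMs.
function createAggregator(timeoutMs, listener) {
  let pending = [];
  let timer = null;
  return function onChange(file) {
    pending.push(file);
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      listener(pending);
      pending = [];
      timer = null;
    }, timeoutMs);
  };
}
```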

For multiple files the principle is similar, except that every file in the list is checked at intervals. But how is the list of files to watch determined? By default, Webpack recursively parses the files the Entry files depend on, starting from the configured Entry, and adds all the dependent files to the watch list. Webpack is quite smart here: it doesn't crudely watch every file in the project directory.

Since storing the file paths and last edit times takes memory, and the periodic checks take CPU and file I/O, it is best to reduce both the number of watched files and the check frequency.

Optimizing File Monitoring Performance

Having understood how file watching works, let's analyze how to optimize its performance.

When watch mode is on, the configured Entry files and everything they recursively depend on are watched by default. Many of these files live under `node_modules`, because today's Web projects depend on a large number of third-party modules. In most cases we never edit files under `node_modules`, only our own source files, so a big optimization is to ignore the files under `node_modules` and not watch them. The relevant configuration is as follows:

module.exports = {
  watchOptions: {
    // Don't watch files under the node_modules directory
    ignored: /node_modules/,
  }
};

With this optimization, the memory and CPU Webpack consumes will drop significantly.

Sometimes you may suspect a third-party module under `node_modules` has a bug and want to modify its files to experiment in your own project. In that case, with the optimization above, you would have to restart the build to see the latest result. But this situation is rare after all.

Besides ignoring some files, the following two methods are also available:

  • The larger `watchOptions.aggregateTimeout` is, the better the performance, because it reduces the frequency of rebuilds.
  • The smaller `watchOptions.poll` is, the better, because it reduces the frequency of checks.

However, the consequence of the two optimization methods is that you will feel that the response and sensitivity of the monitoring mode are reduced.
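As a concrete illustration of the two knobs above, here is a minimal sketch. The values shown are illustrative, not recommendations, and the exact meaning of poll is best checked against the Webpack documentation for your version:

```javascript
module.exports = {
  watchOptions: {
    // 监听到文件变化后,等 300ms 再去执行重建,把多次保存合并成一次重建
    aggregateTimeout: 300,
    // 通过轮询的方式检查文件是否变化(具体数值含义请以 Webpack 文档为准)
    poll: 1000,
    // 不监听 node_modules 目录下的文件
    ignored: /node_modules/,
  },
};
```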

Automatically refresh browser

The next step after a file update is detected is to refresh the browser. The webpack module is responsible for monitoring files, while the webpack-dev-server module is responsible for refreshing the browser. When webpack-dev-server is used to start webpack, webpack's listening mode is turned on by default, and webpack notifies webpack-dev-server when a file changes.

Principle of automatic refresh

There are three ways to control browser refresh:

  1. With the help of a browser extension, refresh the page through the interface the browser provides; WebStorm IDE's LiveEdit feature works this way.
  2. Inject proxy client code into the web page being developed, and refresh the whole page through the proxy client.
  3. Put the web page being developed into an iframe, and see the latest effect by refreshing the iframe.

DevServer supports the 2nd and 3rd methods, and the 2nd is the refresh method adopted by DevServer by default.

Optimize the performance of automatic refresh

DevServer provides the devServer.inline configuration item, which controls whether the proxy client is injected into Chunks; injection happens by default. With inline on, DevServer injects proxy client code into every output Chunk, and when your project outputs many Chunks this slows the build down. In fact, a page only needs one proxy client to support automatic refresh. DevServer injects into every Chunk so crudely because it does not know which Chunks a given page depends on, so it simply injects into all of them: as long as the page depends on any Chunk, the proxy client gets in.

The optimization idea here is therefore to turn off the not-so-elegant inline mode and inject only one proxy client. To turn off inline mode, start DevServer with the command webpack-dev-server --inline false (it can also be set in the configuration file).

With inline off, the page being developed is placed into an iframe, and the iframe refreshes automatically after the source code is edited. At the same time, you will find the build time drops from 1566ms to 1130ms, showing that the optimization took effect. The build-performance gain becomes more pronounced as the number of output Chunks grows.

After you turn off inline, DevServer will automatically prompt you to visit the page through a new URL, http://localhost:8080/webpack-dev-server/, which is somewhat inconvenient.

If you don't want to access your page through an iframe but still want automatic refresh, you need to inject the proxy client script into the web page manually, by inserting the following tag into index.html:

<!--注入 DevServer 提供的代理客户端脚本,这个服务是 DevServer 内置的-->
<script src="http://localhost:8080/webpack-dev-server.js"></script>

After the script above is injected, an independently opened page can refresh automatically. Just remember to delete this development-only code when publishing online.

Turn on hot module replacement

To achieve real-time preview, besides refreshing the whole page as introduced above, DevServer supports a technology called Hot Module Replacement that enables ultra-responsive real-time preview without refreshing the page. The principle is that when a source file changes, only the changed module is recompiled, and the newly built module then replaces the corresponding old module in the browser.

The advantages of hot module replacement include:

  • Real-time preview has faster response and shorter waiting time.
  • Without refreshing the browser, the running state of the current web page can be maintained. For example, in applications that use Redux to manage data, hot replacement of modules can ensure that the data in Redux remains unchanged when the code is updated.

In general, hot module replacement greatly improves development efficiency and experience.

Principle of hot module replacement

The principle of hot module replacement is similar to that of automatic refresh: a proxy client must be injected into the page being developed to connect DevServer and the page. The difference lies in hot module replacement's unique module-replacement mechanism.

DevServer does not turn on hot module replacement mode by default. To turn it on, just pass the --hot parameter at startup; the complete command is webpack-dev-server --hot.

Besides passing --hot at startup, the mode can also be enabled by adding the plugin yourself. The relevant code is as follows:

const HotModuleReplacementPlugin = require('webpack/lib/HotModuleReplacementPlugin');

module.exports = {
  entry: {
    // 为每个入口都注入代理客户端
    main: ['webpack-dev-server/client?http://localhost:8080/', 'webpack/hot/dev-server', './src/main.js'],
  },
  plugins: [
    // 该插件的作用就是实现模块热替换,实际上当启动时带上 `--hot` 参数,会注入该插件,生成 .hot-update.json 文件
    new HotModuleReplacementPlugin(),
  ],
  devServer: {
    // 告诉 DevServer 要开启模块热替换模式
    hot: true,
  },
};

Passing the --hot parameter when starting Webpack actually completes the configuration above for you automatically.

Compared with the automatic-refresh proxy client, the hot-replacement client pulls in three more files, which means the proxy client is larger.

You can see that the patch contains the new CSS compiled from the main.css file, and the page's style immediately changes to what the source code describes.

But when you modify the main.js file, you will find that hot module replacement does not take effect; instead the entire page is refreshed. Why does modifying main.js behave differently?

To let users flexibly control what happens when old modules are replaced, Webpack allows you to define some code in the source to do the corresponding processing.

The main.js file should read as follows:

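A hedged sketch of such a main.js, assuming a React entry point and the AppComponent module that this section refers to later:

```javascript
import React from 'react';
import { render } from 'react-dom';
import { AppComponent } from './AppComponent';
import './main.css';

render(<AppComponent/>, window.document.getElementById('app'));

// module.hot 只有在开启了模块热替换时才存在
if (module.hot) {
  // accept 的第一个参数指出当前文件接受哪些子模块的替换,
  // 第二个参数是子模块更新后执行的回调,用于实现自定义逻辑
  module.hot.accept(['./AppComponent'], () => {
    // 用新的 AppComponent 重新渲染整个应用
    render(<AppComponent/>, window.document.getElementById('app'));
  });
}
```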

Here module.hot is a global API injected once hot module replacement is enabled, used to control the replacement logic.

Now change Hello,Webpack to Hello,World in the AppComponent.js file, and you will find that hot module replacement takes effect. But when you edit main.js, the whole page refreshes. Why do the two files behave differently?

When a submodule is updated, the update event propagates up layer by layer, that is, from AppComponent.js up to main.js, until some layer accepts the changed module — here main.js, via module.hot.accept(['./AppComponent'], callback) — at which point the callback is executed to run the custom logic. If the event bubbles all the way to the outermost layer without any file accepting it, the page is refreshed directly.

Then why does modifying any .css file trigger hot module replacement even though no file accepts .css changes? The reason is that style-loader injects the code that accepts CSS.

Please do not use hot module replacement in a production environment; it is designed specifically to improve development efficiency.

Optimize hot module replacement

"Updated modules: 68" means that the module with ID 68 was replaced, which is unfriendly to developers: we do not know the mapping between IDs and modules, and outputting the replaced module's name would be better. Webpack's built-in NamedModulesPlugin solves this; just modify the Webpack configuration to add the plugin:

const NamedModulesPlugin = require('webpack/lib/NamedModulesPlugin');

module.exports = {
  plugins: [
    // 显示出被替换模块的名称
    new NamedModulesPlugin(),
  ],
};

In addition, hot module replacement faces the same performance problems as automatic refresh, since both monitor file changes and inject a client. To optimize its build performance, the thinking is very similar to that of automatic refresh: listen to fewer files, ignoring those under the node_modules directory. However, the optimization of turning off the default inline mode and injecting the proxy client manually cannot be used with hot module replacement, because hot module replacement relies on the proxy client code being included in every Chunk.

Distinguishing environment

Why do we need to distinguish the environment

When developing web pages, there are usually multiple operating environments, such as:

  1. An environment that facilitates development and debugging during the development process.
  2. Published online to the user’s operating environment.

Although the two different environments are compiled from the same source code, the content of the code is different. The differences include:

  • The online code is compressed by the method mentioned in the compression code.
  • The development code contains some prompt logs for prompting developers, which ordinary users cannot see.
  • The address of the back-end data interface connected by the development code may also be different from the online environment, because the impact on the online data during the development process should be avoided.

To reuse code as much as possible, the build needs to output different code according to the environment the code will run in, and we need a mechanism to distinguish environments in the source code. Fortunately, Webpack already provides one.

How to Distinguish Environment

The specific distinction method is very simple, in the source code through the following ways:

if (process.env.NODE_ENV === 'production') {
  // 线上环境才执行的代码
} else {
  // 开发环境才执行的代码
}

The general principle is to judge which branch to execute by means of the values of environment variables.

When your code contains statements that use the process module, Webpack automatically bundles in code emulating the process module, to support non-Node.js runtimes; when process is not used, that code is not bundled. This injected module emulates just enough of Node.js's process to support statements like process.env.NODE_ENV === 'production' above.

When building code for the production environment, you need to define the environment variable NODE_ENV = 'production' for it. The relevant Webpack configuration is as follows:

const DefinePlugin = require('webpack/lib/DefinePlugin');

module.exports = {
  plugins: [
    new DefinePlugin({
      // 定义 NODE_ENV 环境变量为 production
      'process.env': {
        NODE_ENV: JSON.stringify('production'),
      },
    }),
  ],
};
Note that the value of the environment variable is wrapped with JSON.stringify. The reason is that the replacement value must itself be source code for a string, i.e. a string wrapped in quotation marks, and JSON.stringify('production') is exactly '"production"'.
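A quick runnable check of this point — DefinePlugin performs textual substitution, so the defined value must itself be valid source code for the expression you want:

```javascript
// JSON.stringify('production') 的结果是一个带双引号的字符串,
// 把它替换进源码后,正好是字符串字面量 "production"
const value = JSON.stringify('production');
console.log(value);                    // "production"(带双引号)
console.log(value === '"production"'); // true
```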

After executing the build, you will find the following code in the output file:

if (true) {
  // 线上环境才执行的代码
} else {
  // 开发环境才执行的代码
}

The defined value has been substituted into the source code: process.env.NODE_ENV === 'production' has been directly replaced by true. And since the statement accessing process has been replaced away, Webpack no longer bundles in the process module.

Environment variables defined by DefinePlugin are only effective for the code Webpack processes; they do not affect the value of environment variables at Node.js runtime.

Environment variables defined through the shell, as in NODE_ENV=production webpack, are not picked up by Webpack and have no effect on the environment-distinguishing statements in the code Webpack processes.

In other words, the environment-distinguishing statements above only work when the variable is defined through DefinePlugin; defining it again via a shell script is unnecessary.

If you want Webpack to pick up environment variables defined through shell scripts, you can use EnvironmentPlugin, as follows:

new webpack.EnvironmentPlugin(['NODE_ENV'])

The above code is actually equivalent to:

new webpack.DefinePlugin({
  'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV),
});

Combined with UglifyJS

In fact, the output code above can be further optimized, because an if (true) statement will only ever execute the first branch; the best output would simply be the content of that branch:
// 线上环境才执行的代码


Webpack itself does not remove dead code, but UglifyJS can. For how to use it, see Compressed JavaScript in the Compressed code section.

Environmental Differentiation in Third-Party Libraries

Besides the environment-specific code in your own source, many third-party libraries also distinguish environments for optimization. Take React as an example; it maintains two sets of environment-specific code:

  1. Development environment: includes warning-log code for developers, such as type checking and HTML element checking.
  2. Production environment: all developer-only code is removed, keeping only what React needs to run, to optimize size and performance.

For example, React source code contains a large number of codes like the following:

if (process.env.NODE_ENV !== 'production') {
  warning(false, '%s(...): Can only update a mounted or mounting component.... ');
}

If you do not define NODE_ENV=production, these warning logs will be included in the output code, making the output file very large.

In process.env.NODE_ENV !== 'production', the name NODE_ENV and the value 'production' are community conventions; this judgment statement is commonly used to distinguish the development environment from the production environment.

Compressed code

The JavaScript and CSS resources a browser fetches from the server are text, and the bigger the files, the longer the page takes to load. To speed pages up and reduce network traffic, these resources can be compressed. Besides compressing transferred files with the GZIP algorithm, the text itself can also be compressed.

Compressing the text itself not only speeds up page loading but also obfuscates the source code. Because compressed code is hard to read, even if someone downloads your page's code, analyzing and modifying it becomes much harder.

Let’s introduce how to compress the code in the Webpack one by one.

Compressed JavaScript

At present, the most mature JavaScript code compression tool is UglifyJS, which can analyze the JavaScript code syntax tree and understand the code meaning, so as to achieve optimization such as removing invalid codes, removing log output codes, shortening variable names, etc.

To access UglifyJS in the Webpack, it needs to be in the form of plug-ins. At present, there are two mature plug-ins, namely:

  • UglifyJsPlugin: achieves compression by wrapping UglifyJS.
  • ParallelUglifyPlugin: compresses in parallel with multiple processes; described in detail in Using ParallelUglifyPlugin.

Since ParallelUglifyPlugin was covered in 4-4 Using ParallelUglifyPlugin and will not be repeated, this section focuses on how to configure UglifyJS for the best compression.

UglifyJS provides many options for configuring which rules to apply during compression, all listed in its official documentation. Since there are many, here are some common ones with detailed explanations:

  • sourceMap: whether to generate a Source Map for the compressed code; off by default, and turning it on greatly increases build time. The Source Map of compressed code is usually not sent to users' browsers, but used by internal developers to debug production code.
  • beautify: whether to output readable code, i.e. keep spaces and tabs; on by default. For better compression, set it to false.
  • comments: whether to keep comments in the code; kept by default. For better compression, set it to false.
  • compress.warnings: whether UglifyJS outputs a warning when it deletes unused code; on by default. It can be set to false to silence these warnings.
  • drop_console: whether to remove all console statements in the code; off by default. Turning it on not only improves compression but also avoids errors in old IE browsers that do not support console.
  • collapse_vars: whether to inline variables that are defined but used only once, e.g. converting var x = 5; y = x into y = 5; off by default. For better compression, set it to true.
  • reduce_vars: whether to extract static values that occur multiple times but were not assigned to a variable, e.g. converting x = 'Hello'; y = 'Hello' into var a = 'Hello'; x = a; y = a; off by default. For better compression, set it to true.

In other words, without affecting the correct execution of the code, an optimized compression configuration looks like this:

const UglifyJSPlugin = require('webpack/lib/optimize/UglifyJsPlugin');

module.exports = {
  plugins: [
    // 压缩输出的 JS 代码
    new UglifyJSPlugin({
      compress: {
        // 在 UglifyJs 删除没有用到的代码时不输出警告
        warnings: false,
        // 删除所有的 `console` 语句,可以兼容 IE 浏览器
        drop_console: true,
        // 内嵌定义了但是只用到一次的变量
        collapse_vars: true,
        // 提取出出现多次但是没有定义成变量去引用的静态值
        reduce_vars: true,
      },
      output: {
        // 最紧凑的输出
        beautify: false,
        // 删除所有的注释
        comments: false,
      },
    }),
  ],
};

As the configuration above shows, Webpack has UglifyJsPlugin built in. Note that UglifyJsPlugin currently uses UglifyJS2, not the old UglifyJS1; the two versions differ in configuration, so check the version when reading documentation.

Webpack also provides a more convenient way to use UglifyJSPlugin: start Webpack with the --optimize-minimize parameter, i.e. webpack --optimize-minimize, and Webpack automatically injects a UglifyJSPlugin with a default configuration.

Compress ES6

Although most JavaScript engines currently do not fully support the new features in ES6, ES6 code can already be directly executed under some specific operating environments, such as the latest version of Chrome and ReactNative’s engine JavaScriptCore.

The code running ES6 has the following advantages over the converted ES5 code:

  • The same logic implemented in ES6 takes less code than in ES5.
  • JavaScript engines have optimized ES6 syntax; for example, variables declared with const are faster to read.

Therefore, if the operating environment permits, we should use native ES6 code to run as much as possible, instead of converted ES5 code.

If you compress ES6 code with the method described above, you will find that UglifyJS errors out, because UglifyJS only understands ES5 syntax. To compress ES6 code, you need UglifyES, which is specific to ES6.

UglifyES and UglifyJS come from different branches of the same project and their options are basically the same, but they are wired into Webpack differently. To use UglifyES with Webpack, the built-in UglifyJsPlugin cannot be used; instead, the latest version of uglifyjs-webpack-plugin needs to be installed and used separately. The installation command is:

npm i -D uglifyjs-webpack-plugin@beta

The related Webpack configuration is as follows:

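As a hedged sketch, wiring up uglifyjs-webpack-plugin usually looks roughly like the following; note that, unlike the built-in plugin, the compression options sit under a uglifyOptions key — check the plugin's documentation for your installed version:

```javascript
const UglifyESPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  plugins: [
    new UglifyESPlugin({
      // 与内置的 UglifyJsPlugin 不同,选项要多嵌套一层 uglifyOptions
      uglifyOptions: {
        compress: {
          warnings: false,
          drop_console: true,
          collapse_vars: true,
          reduce_vars: true,
        },
        output: {
          beautify: false,
          comments: false,
        },
      },
    }),
  ],
};
```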

Meanwhile, to keep babel-loader from outputting ES5 code, remove babel-preset-env from the .babelrc configuration file, since it is what converts ES6 into ES5; other Babel presets and plugins, such as babel-preset-react, should stay.

Compressed CSS

CSS code can also be compressed like JavaScript, to improve loading speed and obfuscate the code. Currently the mature and reliable CSS compression tool is cssnano, which is based on PostCSS.

cssnano understands the meaning of CSS code rather than just deleting whitespace. For example:

  • margin: 10px 20px 10px 20px is compressed into margin: 10px 20px
  • color: #ff0000 is compressed into color:red

Many more compression rules can be found on its official website; the compression rate usually reaches about 60%.

Wiring cssnano into Webpack is also very simple, because css-loader has it built in. To turn on cssnano compression, just enable css-loader's minimize option. The relevant Webpack configuration is as follows:

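A minimal sketch of such a configuration; the minimize option applies to css-loader versions of this article's era (newer css-loader versions removed it in favor of standalone minimizer plugins):

```javascript
module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          'style-loader',
          {
            loader: 'css-loader',
            // 开启 minimize 选项,用内置的 cssnano 压缩 CSS 代码
            options: { minimize: true },
          },
        ],
      },
    ],
  },
};
```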

CDN acceleration

Although compressing code reduces the amount transferred over the network, in fact what affects user experience most is the wait while the page first loads, and its root cause is the time spent on network transmission. A CDN's job is to speed up that transmission.

CDN stands for Content Delivery Network. By deploying resources around the world, it lets users fetch resources from the nearest server according to the principle of proximity, accelerating resource acquisition. A CDN improves network speed by working around problems at the physical transport layer, such as the limit of the speed of light and packet loss along the way.

In this section, you don’t need to understand the specific operation process and implementation principle of CDN. You can simply regard CDN service as a faster HTTP service. At present, many large companies will set up their own CDN services. Even if you don’t have the resources to set up a set of CDN services, all major cloud service providers provide CDN services on a volume basis.

Access CDN

To put a website on a CDN, the page's static resources need to be uploaded to the CDN service, and when serving the page these resources must be accessed through the URLs provided by the CDN service.

As a detailed example, take a single-page application whose build outputs the following structure:

|-- app_9d89c964.js
|-- app_a6976b6d.css
|-- arch_ae805d49.png
`-- index.html

The content of index.html is as follows:

<html>
<head>
  <meta charset="UTF-8">
  <link rel="stylesheet" href="app_a6976b6d.css">
</head>
<body>
<div id="app"></div>
<script src="app_9d89c964.js"></script>
</body>
</html>

The content of app_a6976b6d.css is as follows:

body{background:url(arch_ae805d49.png) repeat}h1{color:red}

As you can see, the resources are referenced by relative paths; when they are all placed on the same CDN service, the page works normally. Note, however, that CDN services usually cache resources for a long time: once a user has fetched index.html from the CDN, even if index.html is later overwritten by a new release, the user will keep running the old version for a long time, so new releases would not take effect immediately.

To avoid the above problems, the industry’s more mature approach is this:

  • For HTML files: Do not open the cache, put HTML on your own server instead of CDN service, and close the cache on your own server at the same time. Your server only provides HTML files and data interfaces.
  • For static JavaScript, CSS, image and other files: enable CDN and caching, upload them to the CDN service, and give each file name a hash computed from the file's content, like app_a6976b6d.css above. Because the hash changes with the content, any change to a file changes its URL, and the changed file will be re-downloaded no matter how long the cache lifetime is.

With this scheme, the resource URLs in the HTML file also need to be replaced with addresses provided by the CDN service; for example, the index.html above becomes:

<html>
<head>
  <meta charset="UTF-8">
  <link rel="stylesheet" href="//cdn.com/id/app_a6976b6d.css">
</head>
<body>
<div id="app"></div>
<script src="//cdn.com/id/app_9d89c964.js"></script>
</body>
</html>

The content of app_a6976b6d.css should also change accordingly:

body{background:url(//cdn.com/id/arch_ae805d49.png) repeat}h1{color:red}

In other words, all the previous relative paths have become absolute URLs pointing to the CDN service.

If URLs like //cdn.com/id/app_a6976b6d.css look strange to you, note that they omit the http: or https: prefix. The advantage is that the browser automatically chooses HTTP or HTTPS for these resources according to the scheme of the current page's URL.

Also, browsers limit the number of concurrent requests for resources under the same domain name (roughly 4 at a time, varying by browser). Knowing this, you will spot a big problem with the approach above: all static resources live under one CDN domain, cdn.com. If the page has many resources, such as many images, resource loading gets blocked, because only a few load at a time and the rest must wait for them to finish. To solve this, the static resources can be spread across different CDN services: JavaScript files under the js.cdn.com domain, CSS files under css.cdn.com, and image files under img.cdn.com. After that, index.html needs to become:

<html>
<head>
  <meta charset="UTF-8">
  <link rel="stylesheet" href="//css.cdn.com/id/app_a6976b6d.css">
</head>
<body>
<div id="app"></div>
<script src="//js.cdn.com/id/app_9d89c964.js"></script>
</body>
</html>

Using multiple domain names introduces a new problem: extra DNS resolution time. Whether to spread resources across domains must be weighed against your own needs. You can reduce the resolution delay by pre-resolving the domains with a tag such as <link rel="dns-prefetch" href="//js.cdn.com"> in the HTML head.

Realizing CDN access with Webpack

To summarize, the build needs to achieve the following:

  • The URLs of static resources must become absolute URLs pointing to the CDN service, rather than URLs relative to the HTML file.
  • Static resource file names need a hash computed from the file content, so that a stale cached copy is never used after the content changes.
  • Different types of resources go on CDN services under different domain names, to prevent parallel loading of resources from being blocked.

Let’s first look at the final Webpack configuration to achieve the above requirements:

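A hedged sketch of what such a configuration can look like, assuming WebPlugin comes from the web-webpack-plugin package and ExtractTextPlugin from extract-text-webpack-plugin; the CDN domains are the placeholder ones used in this article:

```javascript
const ExtractTextPlugin = require('extract-text-webpack-plugin');
const { WebPlugin } = require('web-webpack-plugin');

module.exports = {
  output: {
    // 给输出的 JavaScript 文件名带上内容 Hash,并放到 js.cdn.com
    filename: '[name]_[chunkhash:8].js',
    publicPath: '//js.cdn.com/id/',
  },
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ExtractTextPlugin.extract({
          // CSS 中导入的资源(如图片)放到 img.cdn.com
          use: ['css-loader?minimize'],
          publicPath: '//img.cdn.com/id/',
        }),
      },
    ],
  },
  plugins: [
    // 输出的 CSS 文件名也带上内容 Hash
    new ExtractTextPlugin({ filename: '[name]_[contenthash:8].css' }),
    new WebPlugin({
      template: './template.html',
      filename: 'index.html',
      // HTML 中引用的 CSS 文件放到 css.cdn.com
      stylePublicPath: '//css.cdn.com/id/',
    }),
  ],
};
```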

The core of the configuration is setting, through the publicPath parameters, the URL prefix of the CDN directory where static resources live. To let different types of resources go out to different CDNs, set:

  • output.publicPath: the address for JavaScript files.
  • css-loader.publicPath: the address for resources imported from CSS.
  • WebPlugin.stylePublicPath: the address for CSS files.

After publicPath is set, WebPlugin uses it when generating the HTML file, and css-loader uses it when transforming CSS code, to replace the original relative addresses with the corresponding online addresses.

Using Tree Shaking

Tree Shaking can eliminate dead code, i.e. JavaScript that is never used. It relies on static ES6 module syntax, e.g. importing and exporting via import and export. Tree Shaking first appeared in Rollup and was introduced into Webpack in version 2.0.

For a more intuitive understanding, consider a concrete example. Suppose a file util.js contains many utility functions and constants, and main.js imports and uses util.js, as follows:

Source code of util.js:

export function funcA() {
  console.log('funcA');
}

export function funB() {
  console.log('funB');
}

Source code of main.js:

import {funcA} from './util.js';
funcA();

util.js after Tree Shaking:

export function funcA() {
  console.log('funcA');
}

Because only funcA in util.js is used, the rest is identified as dead code by Tree Shaking and eliminated.

Note that for Tree Shaking to work, the JavaScript given to Webpack must use ES6 module syntax, because ES6 modules are static (paths in import and export statements must be static strings and cannot appear inside other code blocks), which lets Webpack simply analyze which exports are imported. With ES5-style modules, such as module.exports = {...}, require(x + y), or if (x) { require('./util') }, Webpack cannot tell which code can be eliminated.

Access Tree Shaking

The above describes what Tree Shaking does, and next steps will show you how to configure Webpack to make Tree Shaking effective.

First, to hand ES6 module code to Webpack, Babel must be configured to keep the ES6 module statements. Modify the .babelrc file as follows:

{
  "presets": [
    [
      "env",
      {
        "modules": false
      }
    ]
  ]
}

Here "modules": false means turning off Babel's module-conversion feature and keeping the original ES6 module syntax.

With Babel configured, rerun Webpack, starting it with the --display-used-exports parameter to make Tree Shaking's work easier to trace; you will then see output like the following in the console:

> webpack --display-used-exports
bundle.js  3.5 kB       0  [emitted]  main
   [0] ./main.js 41 bytes {0} [built]
   [1] ./util.js 511 bytes {0} [built]
       [only some exports used: funcA]

The line [only some exports used: funcA] indicates that only the exported funcA of util.js is used, showing that Webpack did correctly analyze which code can be eliminated.

But if you open the `bundle.js` file output by Webpack, you will find that the unused code is still there, looking like this:

/* harmony export (immutable) */
__webpack_exports__["a"] = funcA;

/* unused harmony export funB */

function funcA() {
  console.log('funcA');
}

function funB() {
  console.log('funB');
}

Webpack only marks which exports are used and which are not; to actually remove the unused code, UglifyJS must process the output again. Hooking up UglifyJS is simple: you can add the UglifyJSPlugin as described in 4-8 Compress Code, or simply start Webpack with the `--optimize-minimize` flag. To verify Tree Shaking quickly, we use the simpler latter option.

After restarting Webpack with `webpack --display-used-exports --optimize-minimize`, open the newly output `bundle.js`, which reads as follows:

function r() {
  console.log('funcA');
}
t.a = r;

It can be seen that Tree Shaking did its job: all the unused code was eliminated.

When your project uses many third-party libraries, you may find that Tree Shaking does not seem to take effect. The reason is that most code on Npm uses CommonJS syntax, which makes Tree Shaking degrade and stop working. Fortunately, some libraries take this into account: when published to Npm they provide two versions of the code, one using CommonJS module syntax and one using ES6 module syntax, and the `package.json` file points to the entry of each.

Take the `redux` library as an example; the directory structure it publishes to Npm is:

|-- es
|   |-- index.js # ES6 module syntax
|-- lib
|   |-- index.js # ES5 (CommonJS) module syntax
|-- package.json

There are two fields in the `package.json` file:

  "main": "lib/index.js", // 指明采用 CommonJS 模块化的代码入口
  "jsnext:main": "es/index.js" // 指明采用 ES6 模块化的代码入口

The `mainFields` resolve option configures which fields are used as the module's entry description. For Tree Shaking to take effect on `redux`, Webpack's file search rules need to be configured as follows:

module.exports = {
  resolve: {
    // for third-party modules from Npm, prefer the ES6 module files pointed to by jsnext:main
    mainFields: ['jsnext:main', 'browser', 'main']
  },
};

The above configuration means: use `jsnext:main` as the entry first; if `jsnext:main` does not exist, fall back to `browser` or `main`. Although not every third-party module on Npm provides ES6 module code, the ones that do will be optimized.
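The fallback order can be sketched as a tiny resolver (a simplification of what Webpack's resolve logic does; `pickEntry` and the sample package data are hypothetical):

```javascript
// Simplified sketch of mainFields resolution: return the first field
// present in the module's package.json, in configured priority order.
function pickEntry(pkg, mainFields) {
  for (const field of mainFields) {
    if (pkg[field]) return pkg[field];
  }
  return 'index.js'; // fallback when no field matches
}

// A package like redux that ships both entries:
const withEs6 = { main: 'lib/index.js', 'jsnext:main': 'es/index.js' };
// A package that only ships CommonJS:
const cjsOnly = { main: 'lib/index.js' };

const order = ['jsnext:main', 'browser', 'main'];
console.log(pickEntry(withEs6, order)); // es/index.js
console.log(pickEntry(cjsOnly, order)); // lib/index.js
```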

More and more third-party modules on Npm now take Tree Shaking into account and provide support for it. Using `jsnext:main` as the entry for ES6 module code is a community convention. If you publish a library to Npm in the future, please support Tree Shaking, so that it can deliver a greater optimization effect and benefit more people.

Extract common code

Why you need to extract common code

Large websites usually consist of multiple pages, each of which is an independent single-page application. However, because all pages adopt the same technology stack and use the same set of style codes, there are many identical codes between these pages.

If the code of each page includes these common parts, it will cause the following problems:

  • The same resources are repeatedly loaded, wasting user traffic and server costs;
  • The resources that need to be loaded for each page are too large, which leads to slow loading of the first screen of the page and affects the user experience.

If the code common to multiple pages is separated into separate files, the above problems can be optimized. The reason is that if the user visits one of the web pages of the website, the probability of visiting other web pages under this website will be very high. After the user accesses for the first time, the files of the public codes of these pages have been cached by the browser. When the user switches to other pages, the files storing the public codes will not be reloaded, but will be directly retrieved from the cache. This has the following benefits:

  • Reduce network transmission flow and server cost;
  • Although the speed of users opening the website for the first time is not optimized, the speed of accessing other pages will be greatly improved afterwards.

How to Extract Common Codes

You already know the benefits of extracting public code, but how do you do it in actual combat to achieve the best results? Usually you can use the following principles to extract public codes for your website:

  • Based on the technology stack your website uses, find the basic libraries that all pages need. For a website using the React stack, for example, all pages depend on libraries such as `react` and `react-dom`; extract them into a separate file, generally called `base.js`, because it contains the basic runtime of all the pages.
  • After excluding the code already in `base.js`, find the common code that all pages still depend on and extract it into `common.js`.
  • Generate a separate file for each page containing only the code that page needs individually, excluding everything already in `base.js` and `common.js`.

The structure diagram between documents is as follows:

Reading this, you may wonder: since we can find the common code that all pages depend on and extract it into `common.js`, why also extract the base libraries that all pages need into `base.js`? The reason is to cache the `base.js` file for the long term.

Files released online will use the approach introduced in 4-9 CDN Acceleration: the file name of each static file carries a hash computed from the file's content, so the `base.js` file name becomes something like `base_3b1682ac.js`, allowing the file to be cached long-term. Websites are updated and released continuously, and each release causes `common.js` and the page-specific JavaScript files to change content, so their hashes, and therefore their caches, are refreshed.

The advantage of extracting the base libraries needed by all pages into `base.js` is that as long as the base library versions are not upgraded, the content of `base.js` does not change, its hash is not updated, and its cache is not invalidated. On every release thereafter, the browser uses the cached `base.js` instead of downloading it again. Since `base.js` is usually large, this can significantly improve page loading speed.

How to extract public code through Webpack

You already know which common code to extract; next you will see how to implement it with Webpack.

Webpack has a built-in plugin, `CommonsChunkPlugin`, specifically designed to extract the parts common to multiple Chunks. The general usage of `CommonsChunkPlugin` is as follows:

const CommonsChunkPlugin = require('webpack/lib/optimize/CommonsChunkPlugin');

new CommonsChunkPlugin({
  // which Chunks to extract from
  chunks: ['a', 'b'],
  // name of the new Chunk formed by the extracted common part
  name: 'common'
});

The above configuration extracts the parts common to pages A and B into `common`.

Each `CommonsChunkPlugin` instance generates a new Chunk containing the extracted code; the `name` property must be specified to tell the plugin the name of the newly generated Chunk. The `chunks` property indicates which existing Chunks to extract from; if it is omitted, the plugin extracts from all known Chunks by default.

Chunk is a collection of files. A Chunk contains the Chunk’s entry file and the files on which the entry file depends.

The `common` Chunk output by the above configuration contains `react` and `react-dom`, the base runtime that all pages depend on. To move the base runtime out of `common` and into `base`, some further processing is needed.

First, you need a Chunk that depends only on the base libraries that all pages depend on and the styles that all pages use. To do this, write a file `base.js` in the project describing the modules the `base` Chunk depends on. Its content is as follows:

// base libraries that all pages depend on
import 'react';
import 'react-dom';
// styles used by all pages
import './base.css';

Then modify the Webpack configuration by adding `base` to `entry`; the relevant changes are as follows:

module.exports = {
  entry: {
    base: './base.js'
  },
};

This completes the configuration of the new Chunk base.

To extract the part that `common` shares with `base` out of `common`, another `CommonsChunkPlugin` needs to be configured. The relevant code is as follows:

new CommonsChunkPlugin({
  // extract the common part from the two existing Chunks common and base
  chunks: ['common', 'base'],
  // put the common part into base
  name: 'base'
});

Since the part common to `common` and `base` is the part that `base` already contains, after this configuration `common` becomes smaller while `base` remains unchanged.
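Putting the pieces together, the whole setup might look like the following sketch (assuming two pages `a` and `b`; the entry paths are illustrative):

```javascript
const CommonsChunkPlugin = require('webpack/lib/optimize/CommonsChunkPlugin');

module.exports = {
  entry: {
    a: './pages/a.js',   // page A's entry (illustrative path)
    b: './pages/b.js',   // page B's entry (illustrative path)
    base: './base.js',   // depends only on the base libraries and shared styles
  },
  plugins: [
    // step 1: extract what pages a and b share into common
    new CommonsChunkPlugin({
      chunks: ['a', 'b'],
      name: 'common',
    }),
    // step 2: move the part common also shares with base into base
    new CommonsChunkPlugin({
      chunks: ['common', 'base'],
      name: 'base',
    }),
  ],
};
```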

With all of the above configured, re-running the build will produce four files:

base.js: code made up of the base libraries that all pages depend on;
common.js: code that pages A and B both need but that does not appear in base.js;
a.js: code needed only by page A;
b.js: code needed only by page B.
Take page A as an example: for the page to work properly, its HTML needs to include the following files in the following order:

<script src="base.js"></script>
<script src="common.js"></script>
<script src="a.js"></script>

This completes all the steps needed to extract the common code.

For CSS resources, the above theories and methods are equally effective, that is to say, you can do the same optimization for CSS files.

With the above method, `common.js` may turn out to contain no code, because excluding the base runtime it is hard to find modules that all pages use. When that happens, you can do one of the following:

  • `CommonsChunkPlugin` provides a `minChunks` option indicating the minimum number of the specified Chunks a file must appear in before it is extracted. For example, with `minChunks=2` and `chunks=['a','b','c','d']`, any file that appears in two or more of `['a','b','c','d']` is extracted. You can adjust `minChunks` to your needs: the smaller `minChunks` is, the more files are extracted into `common.js`, but some pages then load more irrelevant resources; the larger it is, the fewer files are extracted, `common.js` becomes smaller, and the effect weakens.
  • Use multiple `CommonsChunkPlugin`s based on how pages are related, extracting the parts common to selected groups of pages rather than to all pages; such operations can be layered. This works well, but the drawback is complex configuration: you need to think about how to set it up according to the relationships between pages, so the method is not universal.
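A `minChunks` tweak is just one extra property on the plugin (the values here are illustrative):

```javascript
new CommonsChunkPlugin({
  chunks: ['a', 'b', 'c', 'd'],
  name: 'common',
  // extract a module only if it appears in at least 3 of the 4 chunks;
  // lower this to extract more, raise it to extract less
  minChunks: 3,
});
```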

The complete project code for this example is available at the original link.

Split code is loaded on demand

Why do I need to load on demand

With the development of the Internet, web pages need to carry more and more features. Websites that use a single-page application as their front-end architecture face the problem of a huge amount of code to load for one page, because many features are concentrated in a single HTML file. This leads to slow page loads and janky interactions, and the user experience suffers.

The root cause of the problem is loading the code for all features at once, while in fact a user only uses some of them at any stage. The solution is therefore to load only the code for whatever feature the user currently needs, the so-called on-demand loading.

How to Use Load on Demand

When optimizing on-demand loading for single-page applications, the following principles are generally adopted:

  • Divide the whole site into small features, then group them into several categories according to how related they are.
  • Merge each category into one Chunk and load the corresponding Chunk on demand.
  • Do not load on demand the features needed for the first screen the user sees when opening the site; put them into the Chunk containing the execution entry instead, to reduce the page load time the user perceives.
  • Individual feature points that depend on large amounts of code, such as a chart feature that depends on `Chart.js` or a video playback feature that depends on `flv.js`, can be loaded on demand.

Loading the split code needs a trigger: when the user performs, or is about to perform, the corresponding action, the corresponding code is loaded. The timing needs to be weighed and decided by the developer according to the page's requirements.

Since the code split out for on-demand loading also takes time to load, you can predict what the user might do next and load the corresponding code in advance, so that the user never notices the network time.
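The "load in advance" idea boils down to starting the load early and caching its Promise so the later handler reuses it. A framework-free sketch (the `once` helper and the simulated loader are hypothetical; in a real page the loader would be `() => import('./show')` and the triggers would be DOM events such as mouseenter and click):

```javascript
// Cache a loader's Promise so the chunk is fetched at most once,
// no matter how many times loading is triggered.
function once(loader) {
  let cached;
  return () => (cached = cached || loader());
}

// Simulated dynamic import standing in for import('./show'):
let networkLoads = 0;
const loadShow = once(() => {
  networkLoads += 1; // a real import() would hit the network here
  return Promise.resolve({ default: (name) => 'Hello ' + name });
});

loadShow(); // e.g. fired on mouseenter, before the user clicks
loadShow().then((mod) => {
  // e.g. fired on click: the Promise is already in flight or resolved
  console.log(mod.default('Webpack'), '- network loads:', networkLoads);
});
```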

Load on demand with Webpack

Webpack has powerful built-in code-splitting functionality for on-demand loading, and it is very simple to use.

For example, it is now necessary to make such a web page optimized for on-demand loading:

  • When the page is first loaded, only the `main.js` file is loaded; the page displays a button, and `main.js` contains only the code for listening to the button event and loading the split code.
  • Only when the button is clicked is the split-out `show.js` file loaded; the function in `show.js` is executed once loading succeeds.

The content of `main.js` is as follows:

window.document.getElementById('btn').addEventListener('click', function () {
  // load show.js only after the button is clicked; once loaded, run the function it exports
  import(/* webpackChunkName: "show" */ './show').then((show) => {
    show('Webpack');
  });
});

The content of `show.js` is as follows:

module.exports = function (content) {
  window.alert('Hello ' + content);
};

The key line in the code is `import(/* webpackChunkName: "show" */ './show')`. Webpack has built-in support for the `import(*)` statement; when Webpack encounters such a statement, it does the following:

  • Creates a new Chunk with `./show.js` as its entry;
  • Loads the file generated for the Chunk only when the code executes to the `import` statement;
  • `import` returns a Promise, and the content exported by `show.js` can be obtained in the Promise's `then` callback.

For code split with `import()` to run normally, the browser must support the Promise API, because `import()` returns a Promise. For browsers without native Promise support, a Promise polyfill can be injected.

`/* webpackChunkName: "show" */` gives the dynamically generated Chunk a name, to make tracing and debugging the code easier. If you do not specify a name, the default file name will be `[id].js`. `/* webpackChunkName: "show" */` is a new feature introduced in Webpack 3; before Webpack 3, dynamically generated Chunks could not be named.

For the ChunkName configured in `/* webpackChunkName: "show" */` to be output correctly, Webpack also needs the following configuration:

module.exports = {
  // JS execution entry file
  entry: {
    main: './main.js',
  },
  output: {
    // output file names for Chunks generated from entry
    filename: '[name].js',
    // output file names for dynamically loaded Chunks
    chunkFilename: '[name].js',
  }
};

The most critical line is `chunkFilename: '[name].js'`, which specifically sets the output file name of dynamically generated Chunks. Without it, the file name of the split code would be `[id].js`.

Load on demand and ReactRouter

In practice, scenes are rarely as simple as the one above. Next is a real-world example: optimizing an application that uses ReactRouter with on-demand loading. The example is a single-page application consisting of two sub-pages, with switching and routing between them managed by ReactRouter.

The entry file `main.js` of this single-page application is as follows:

(The `main.js` source is embedded from CodePen: “main.js” by whjin.)

The most critical part of the code is the `getAsyncComponent` function, whose role is to cooperate with ReactRouter to load components on demand; see the comments in the code for details.
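Since the embedded example may not display here, the idea behind such a helper can be sketched without React (hypothetical names; the real `getAsyncComponent` returns a React component class that re-renders via `setState` once the chunk arrives):

```javascript
// Wrap a chunk-loading function so the target is fetched only once
// and a callback fires with the loaded component when it is ready.
function getAsyncComponent(load) {
  let pending; // cache the Promise so repeated route visits reuse it
  return function renderWhenReady(onReady) {
    pending = pending || load();
    pending.then((mod) => onReady(mod.default));
  };
}

// Simulated import(/* webpackChunkName: "page-about" */ './pages/about'):
const renderAbout = getAsyncComponent(() =>
  Promise.resolve({ default: () => '<PageAbout/>' })
);

renderAbout((PageAbout) => console.log(PageAbout())); // <PageAbout/>
```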

Since the source code above needs to be converted by Babel before it can run in the browser, `babel-loader` needs to be configured in Webpack; the source code is first processed by `babel-loader` and then handed to Webpack to handle the `import(*)` statements. But after doing so you will quickly hit a problem: Babel reports an error saying it does not recognize the `import(*)` syntax. The reason is that the `import(*)` syntax has not yet been added to the ECMAScript standard that ES6 defines, so we need to install the Babel plugin `babel-plugin-syntax-dynamic-import` and add it to the `.babelrc` file:

  "presets": [
  "plugins": [

After performing the Webpack build, you will find that three files have been output:

  • main.js: the code block containing the execution entry; it also includes the code needed by `PageHome`, because users need to see `PageHome` as soon as they open the page, so it is not loaded on demand, reducing the load time users perceive;
  • page-about.js: the code block loaded only when a user visits `/about`;
  • page-login.js: the code block loaded only when a user visits `/login`.

You will also find that `page-about.js` and `page-login.js` are not loaded on the first screen; they are loaded only when you switch to the corresponding sub-page.

Use Prepack

The previous optimization methods covered code compression and splitting, which are optimizations at the network-loading level. Beyond that, the runtime efficiency of the code can also be optimized; `Prepack` was born for this.

Prepack, open-sourced by Facebook, takes a more radical approach: while keeping the run results identical, it changes the source code's run logic and outputs higher-performance JavaScript. Prepack is in fact a partial evaluator: it performs computations at compile time and puts the results into the compiled code, instead of evaluating them when the code runs.

Take the following source code as an example:

import React, {Component} from 'react';
import {renderToString} from 'react-dom/server';

function hello(name) {
  return 'hello ' + name;
}

class Button extends Component {
  render() {
    return hello(this.props.name);
  }
}

console.log(renderToString(<Button name='webpack'/>));

After being converted by Prepack, it was directly output as follows:

console.log("hello webpack");

It can be seen that Prepack can improve performance by executing the source code in advance in the compilation stage to obtain the execution result, and then directly outputting the operation result.

The working principle and process of Prepack are roughly as follows:

  • Babel parses the JavaScript source into an abstract syntax tree (AST) so the source can be analyzed more finely;
  • Prepack implements a JavaScript interpreter used to execute the source. With this interpreter, Prepack knows exactly how the source runs and can return the results produced during execution to the output.

On the surface, it seems very beautiful, but in fact, Prepack is not mature and perfect enough. Prepack is still in the initial stage of development and has great limitations, such as:

  • DOM APIs and some Node.js APIs cannot be recognized; if the source contains API calls that depend on the run environment, Prepack reports an error.
  • There is a situation that the optimized code performance is lower instead.
  • The optimized code file size is greatly increased.

In a word, it is still too early to use Prepack in online environment.

Access Webpack

Prepack needs to optimize the final code just before Webpack outputs it, just like UglifyJS does, so a new plugin is needed to hook Prepack into Webpack. Fortunately, someone in the community has already built this plugin: `prepack-webpack-plugin`.

Accessing this plug-in is very simple. The relevant configuration codes are as follows:

const PrepackWebpackPlugin = require('prepack-webpack-plugin').default;

module.exports = {
  plugins: [
    new PrepackWebpackPlugin()
  ]
};

Re-run the build and you will see the Prepack-optimized code in the output.

Enable Scope Hoisting

Scope Hoisting makes the files bundled by Webpack smaller and faster to run. It is also translated as “scope promotion” and is a feature newly introduced in Webpack 3. The name alone does not reveal what Scope Hoisting does; a detailed introduction follows.

Let's first look at how Webpack bundled code before Scope Hoisting.

Suppose there are now two files. `util.js`:

export default 'Hello,Webpack';

And the entry file `main.js`:

import str from './util.js';
console.log(str);

Part of the output after bundling the source above with Webpack is as follows:

[
  (function (module, __webpack_exports__, __webpack_require__) {
    var __WEBPACK_IMPORTED_MODULE_0__util_js__ = __webpack_require__(1);
    console.log(__WEBPACK_IMPORTED_MODULE_0__util_js__["a"]);
  }),
  (function (module, __webpack_exports__, __webpack_require__) {
    __webpack_exports__["a"] = ('Hello,Webpack');
  })
]

After Scope Hoisting is opened, part of the code output from the same source code is as follows:

[
  (function (module, __webpack_exports__, __webpack_require__) {
    var util = ('Hello,Webpack');
    console.log(util);
  })
]

From this you can see that after enabling Scope Hoisting, the number of module functions drops from two to one: the content defined in `util.js` is injected directly into the module corresponding to `main.js`. The advantages of this are:

  • The code volume is smaller because the function declaration statement will generate a large amount of code;
  • At run time, the code creates fewer function scopes, resulting in less memory overhead.

The implementation principle of Scope Hoisting is actually simple: analyze the dependencies among modules and merge scattered modules into one function wherever possible, on the premise that no code duplication is created. Therefore only modules that are referenced exactly once may be merged.

Because Scope Hoisting needs to analyze the dependencies between modules, the source code must use ES6 modular statements, otherwise it will not take effect.

Using Scope Hoisting

Using Scope Hoisting in Webpack is very simple: it is a built-in feature, and only one plugin needs to be configured. The relevant code is as follows:

const ModuleConcatenationPlugin = require('webpack/lib/optimize/ModuleConcatenationPlugin');

module.exports = {
  plugins: [
    // enable Scope Hoisting
    new ModuleConcatenationPlugin(),
  ]
};

At the same time, since Scope Hoisting relies on the source using ES6 module syntax, `mainFields` also needs to be configured. As mentioned in 4-10 Use Tree Shaking, most third-party libraries on Npm use CommonJS syntax, but some also provide ES6 module code. To let Scope Hoisting take full effect, add the following configuration:

module.exports = {
  resolve: {
    // for third-party modules from Npm, prefer the ES6 module files pointed to by jsnext:main
    mainFields: ['jsnext:main', 'browser', 'main']
  },
};

For code that does not use ES6 module syntax, Webpack degrades gracefully and skips the Scope Hoisting optimization. To find out which code Webpack has degraded, start Webpack with the `--display-optimization-bailout` flag; the output log will then contain lines like the following:

[0] ./main.js + 1 modules 80 bytes {0} [built]
    ModuleConcatenation bailout: Module is not an ECMAScript module

The `ModuleConcatenation bailout` line tells you which file was degraded, and why.

In other words, the configuration to open Scope Hoisting and make the most of it is as follows:

const ModuleConcatenationPlugin = require('webpack/lib/optimize/ModuleConcatenationPlugin');

module.exports = {
  resolve: {
    // for third-party modules from Npm, prefer the ES6 module files pointed to by jsnext:main
    mainFields: ['jsnext:main', 'browser', 'main']
  },
  plugins: [
    // enable Scope Hoisting
    new ModuleConcatenationPlugin(),
  ]
};

Output analysis

Although a lot of optimization methods have been introduced before, these methods cannot cover all scenarios, so you need to analyze the output results to determine the next optimization direction.

The most direct analysis method is to read the code Webpack outputs, but because that code is very unreadable and the files are very large, doing so is a headache. To analyze the output more simply and intuitively, the community has produced many visualization tools that present the results graphically so you can quickly spot problems. Next, these tools are introduced.

When starting the Webpack, two parameters are supported, namely:

  • --profile: record timing information during the build;
  • --json: output the build result in JSON format; in the end a single `.json` file containing all build-related information is written.

Start Webpack with both flags: `webpack --profile --json > stats.json`. You will then find an extra `stats.json` file in the project. This `stats.json` file is what the visualization tools described below consume.

`webpack --profile --json` prints the JSON as a string; `> stats.json` is a UNIX/Linux shell redirection that writes the output of `webpack --profile --json` into the `stats.json` file.
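Once `stats.json` exists, it is ordinary JSON, and you can also poke at it yourself before reaching for a visualization tool. A small sketch (the `assets` array with `name` and `size` fields is part of Webpack's stats format; the sample data below is fabricated for illustration):

```javascript
// List the emitted assets and their sizes from a webpack stats object.
function listAssets(stats) {
  return stats.assets.map((a) => `${a.name}: ${a.size} bytes`);
}

// Fabricated example of the relevant slice of stats.json:
const stats = {
  assets: [
    { name: 'base_3b1682ac.js', size: 120000 },
    { name: 'common.js', size: 24000 },
  ],
};

console.log(listAssets(stats));
// [ 'base_3b1682ac.js: 120000 bytes', 'common.js: 24000 bytes' ]
```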

Official visual analysis tool

Webpack officially provides a visual analysis tool, Webpack Analyse, which is an online web application.

After opening the Webpack Analyse page, a pop-up prompts you to upload a JSON file: the `stats.json` file generated above, as shown in the figure:

Webpack Analyse does not upload your chosen `stats.json` file to a server; it is parsed locally in the browser, so you need not worry about your code leaking. After selecting the file, you will immediately see an overview like the following:

The tool is divided into six sections:

  • Modules: Show all modules, one file for each module. It also includes dependency graph, module path, module ID, Chunk to which the module belongs, and module size among all modules.
  • Chunks: Shows all code blocks. One code block contains multiple modules. It also includes ID, name, size of code blocks, number of modules each code block contains, and dependency graph between code blocks.
  • Assets: shows all output file resources, including `.js`, `.css`, images, and so on, together with each file's name, size, and the code block it comes from;
  • Warnings: Show all warning messages during the construction process;
  • Errors: Show all error messages during the construction process;
  • Hints: Demonstrates the time-consuming process of processing each module.

Let's take the project from 3-10 Managing Multiple Single-Page Applications as an example and analyze its `stats.json` file.

Click Modules to view module information; the result looks like this:

Due to the dependence on a large number of third-party modules and the large number of files, the dependency graph between modules is too dense to see clearly, but you can zoom in further.

Click Chunks to view code block information; the result looks like this:

From the dependency graph between code blocks you can see that the two page-level code blocks, `login` and `index`, depend on the extracted common code block `common`.

Click Assets to view the output file resources; the result looks like this:

Click Hints to view how time was spent during output; the result looks like this:

From Hints, we can see the start time and end time of each file in the processing process, thus we can find out which file caused the slow construction.


webpack-bundle-analyzer

`webpack-bundle-analyzer` is another visual analysis tool. Although it has fewer functions than the official one, it is more intuitive.

First let’s look at its effect diagram:

It can easily let you know:

  • What does the packed file contain?
  • The proportion of the size of each file in the total, one can see at a glance which files are large in size;
  • The inclusion relationship between modules;
  • The size of each file after Gzip.

Hooking up `webpack-bundle-analyzer` is very simple; the steps are as follows:

  1. Install webpack-bundle-analyzer globally: `npm i -g webpack-bundle-analyzer`;
  2. Generate the `stats.json` file as described above;
  3. Run `webpack-bundle-analyzer` in the project root; the browser then opens a page showing the effect above.

Optimization summary

This chapter explains how to optimize the configuration of Webpack in the project from the perspectives of development experience and output quality. These optimization methods are all accumulated experience from the actual project. Although each subsection is an independent optimization method, some optimization methods do not conflict and can be combined with each other to achieve the best effect.

The following is an example project that combines all optimization methods in this chapter. Since the construction speed and output quality cannot be both, two files are configured for the project according to the development environment and online environment, as follows:

The configuration file focused on optimizing the development experience is `webpack.config.js`; the one focused on output quality for the online environment is `webpack-dist.config.js`:

(The `webpack-dist.config.js` source is embedded from CodePen: “webpack-dist.config.js” by whjin.)

Although the optimization method introduced in this chapter is difficult to cover all aspects of Webpack, it is sufficient to solve common scenarios in actual combat. For scenes not introduced in this book, you need to optimize them according to your own needs according to the following ideas:

  1. Find out the cause of the problem;
  2. Find a way to solve the problem;
  3. Find a Webpack integration solution to solve the problem.

At the same time, keep following the community as it iterates: learn other people's optimization methods, and stay aware of the latest Webpack features and newly emerged Plugins and Loaders.