At first glance, a considerable part of this book is quite dated, and many of its optimization techniques are no longer strictly necessary given today's fast networks and advanced browsers. Still, anyone who cares about clean code cannot settle for "good enough", and I believe anyone with ambition wants their work to keep improving, so the book remains well worth studying. Subjective reasons aside, I had often run into code in excellent open-source libraries that I could not make sense of; after finishing this book I finally understood, with some emotion, that the authors wrote it that way to improve performance.
The book is divided into ten chapters, and I will write my reading notes chapter by chapter, in the book's order.
I. Loading and Execution

It is common knowledge that placing js just before the closing </body> tag, rather than inside <head></head>, avoids blocking the browser and improves the user experience. Behind this common sense lies the browser's single-process model.
Although network speeds and browser efficiency have improved greatly, with the rise of mobile devices and front-end frameworks such as React, this problem still deserves our attention.
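Beyond script placement, the chapter also covers non-blocking loading. As a minimal sketch (the `loadScript` helper is my own, and the `doc` parameter is passed in purely so the flow can be exercised outside a browser), a script element created from code downloads without stalling the parser:

```javascript
// Minimal sketch of dynamic script injection: a script element created
// programmatically downloads without blocking the parsing of the page.
// `doc` stands in for the global document, for testability.
function loadScript(src, doc) {
  var script = doc.createElement('script');
  script.src = src;             // download starts once appended
  doc.body.appendChild(script); // executes when the download finishes
  return script;
}
```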
II. Data Access
First, the key point about data access is this: every js function has an internal attribute called [[Scope]] — the function's scope chain — which determines what data the function can access.
The book describes concepts such as closures in detail, which I will not repeat here. In my own words: when a function uses a variable, it looks in the nearest place first — the local variables defined inside the function. If the variable is not found there, it searches further out, in the enclosing scope and eventually the global scope. It is precisely this "searching" that causes the performance problem. The book calls the search "identifier resolution", and js performance degrades as resolution depth increases. Hence a common best practice: assign a deeply-scoped value to a local variable and use the local variable inside the function.
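A tiny illustration of that practice (the names `config`, `sumSlow`, and `sumFast` are hypothetical): both functions compute the same result, but the second resolves the out-of-scope identifier only once.

```javascript
// `config` stands in for a value that lives far up the scope chain.
var config = { step: 2 };

// Resolves the `config` identifier up the scope chain on every iteration.
function sumSlow(n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += config.step;
  }
  return total;
}

// Resolves it once, then reuses the local variable.
function sumFast(n) {
  var step = config.step; // single lookup, cached locally
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += step;
  }
  return total;
}
```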
With variables covered, on to methods. In js everything is an object, but js objects are prototype-based, which gives rise to the prototype chain. Much like identifier resolution above, calling a method on an object starts the lookup at the object instance itself; if the method is not found there, the search walks up the prototype chain link by link, and performance degrades accordingly.
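A small sketch of that lookup order (the type and names are my own): a property defined directly on the instance shadows the one further up the prototype chain, and is found first.

```javascript
// The lookup for a.describe starts at the instance itself and only
// then walks up the prototype chain.
function Base() {}
Base.prototype.describe = function () { return 'from prototype'; };

var a = new Base();
var first = a.describe(); // not on the instance: found one link up

a.describe = function () { return 'from instance'; }; // own property
var second = a.describe(); // found immediately, no chain walk
```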
The book also discusses "nested members", such as window.location.href: the engine first resolves the window object, then the nested location object inside it, and finally the href property — several layers of lookups, each with a cost. So when we face nested members in real code, we should remember to cache the object member in a variable, and release the cache with cacheObj = null once we are done. This can effectively improve performance, as in the following example:
```javascript
// bad
document.querySelector('.xxx').style.margin = 10 + 'px'
document.querySelector('.xxx').style.padding = 10 + 'px'
document.querySelector('.xxx').style.color = 'pink'

// good
let xxxStyle = document.querySelector('.xxx').style
xxxStyle.margin = 10 + 'px'
xxxStyle.padding = 10 + 'px'
xxxStyle.color = 'pink'
xxxStyle = null
```
III. DOM in the Browser
This chapter examines dom operations in detail. The first thing to be clear about is that dom operations are inherently slow. Why? Because the browser implements html (the dom) and js in two separate subsystems that communicate through an interface. To borrow the book's metaphor, the dom and js are two islands connected by a toll bridge: every crossing costs time, and that is where the performance problems come from. The chapter comprehensively compares the speed of the various dom manipulation methods.
Dom operations often cause the browser to repaint and reflow. A reflow happens when the layout or geometric properties of the page change; a repaint is the process of drawing dom elements onto the screen.
Performance problems usually come from reflows, because the browser must recalculate the size and position of every affected element on the page and put each one back in the right place. A very important way to improve page performance, therefore, is to avoid reflows.
Note that reflows are not triggered only by modifying an element's size or position: merely reading layout information (such as offsetHeight) can force the browser to flush pending changes and reflow so that it can return an up-to-date value.
Often, though, we have no choice but to manipulate the dom directly, reflows and repaints notwithstanding. The book offers several techniques that help. They are similar in spirit to the local-variable caching above: reduce repeated lookups of dom element properties by caching them, and batch modifications in variables so the dom is queried and updated only once.
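One way to sketch the batching idea (`applyStyles` is a helper of my own, and the element is passed in so it can be tried outside a browser): accumulate the style string in a local variable, then write it to the dom once via cssText.

```javascript
// Batch several style changes into a single write to style.cssText,
// so the dom is touched once instead of once per property.
function applyStyles(el, styles) {
  var css = '';
  for (var key in styles) {
    if (styles.hasOwnProperty(key)) {
      css += key + ':' + styles[key] + ';'; // accumulate locally
    }
  }
  el.style.cssText = css; // single dom write
  return css;
}
```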
Taking elements out of the document flow is another good approach. Once an element leaves the document flow, its impact on other elements is close to zero, so the performance cost is effectively confined to a small area.
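The same idea can be sketched with a DocumentFragment (`fillList` is my own helper; `doc` and `list` are passed in for illustration): the nodes are assembled off-document, so only the final insertion touches the live page.

```javascript
// Build list items inside a DocumentFragment, which lives outside the
// document flow, then insert them all at once: one reflow, not many.
function fillList(doc, list, count) {
  var frag = doc.createDocumentFragment();
  for (var i = 0; i < count; i++) {
    var li = doc.createElement('li');
    li.textContent = 'item ' + i;
    frag.appendChild(li); // off-document: no reflow yet
  }
  list.appendChild(frag); // single insertion into the live dom
}
```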
Beyond reflow and repaint, event handlers bound to dom elements are another culprit behind performance problems. By using the bubbling (or capturing) mechanism built into the browser, event delegation reduces the number of event handlers and thereby improves performance.
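Event delegation can be sketched like this (`delegate` is a hypothetical helper): one listener on the parent examines event.target, instead of one listener per child.

```javascript
// One click handler on the container serves every matching child,
// current and future, by inspecting where the event bubbled up from.
function delegate(parent, tagName, handler) {
  parent.addEventListener('click', function (event) {
    var target = event.target;
    if (target && target.tagName === tagName) {
      handler(target); // only matching children trigger the handler
    }
  });
}
```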
IV. Algorithm and Flow Control
This chapter first analyzes the various loop types and concludes that only the for-in loop is markedly slow: because it searches instance and prototype properties on each iteration, its performance can be as little as 1/7 that of the other loop types.
Loops are everywhere in code. Since they cannot be avoided, performance is improved by minimizing the number of iterations and reducing the work done in each one.
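A minimal sketch of those two measures (`sum` is a made-up example): the array length is read once into a local instead of on every iteration, and the loop body stays small.

```javascript
// Cache items.length in `len` so the loop condition does not re-read
// the property on every pass.
function sum(items) {
  var total = 0;
  for (var i = 0, len = items.length; i < len; i++) {
    total += items[i];
  }
  return total;
}
```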
As for conditional statements — if-else versus switch — their real-world performance differs little; the key is handling the semantics correctly. Sometimes the lookup-table method is an even better option.
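The lookup-table method can be sketched with a plain object (the status codes here are just an example): instead of walking an if-else chain, the answer is a single property access.

```javascript
// A lookup table replaces a chain of comparisons with one keyed access.
var statusText = {
  200: 'OK',
  301: 'Moved Permanently',
  404: 'Not Found',
  500: 'Internal Server Error'
};

function describeStatus(code) {
  return statusText[code] || 'Unknown'; // no branching chain to walk
}
```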
For recursive algorithms, the best way to improve performance is to cache the results of previous computations and reuse them directly in later recursive calls instead of starting from scratch.
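Sketched with the classic Fibonacci example (`makeFib` is my own wrapper): results are stored in a cache, so each value is computed once no matter how often the recursion revisits it.

```javascript
// Memoized recursion: the cache holds every result already computed,
// so fib(n - 1) and fib(n - 2) never redo finished work.
function makeFib() {
  var cache = { 0: 0, 1: 1 };
  return function fib(n) {
    if (!(n in cache)) {
      cache[n] = fib(n - 1) + fib(n - 2);
    }
    return cache[n];
  };
}

var fib = makeFib();
```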
V. Strings and Regular Expressions

(I am not particularly familiar with regular expressions, so I am skipping this chapter.)
VI. Responsive User Interfaces
The first five chapters focus on the raw execution performance of js itself. From this chapter on, the focus shifts to the performance the user actually perceives in the interface.
Because the browser is single-threaded, it cannot run js while handling UI events, and vice versa. For js tasks that run too long, timers can be used to make the task periodically yield control of the thread, letting the browser handle UI events first and improving the user experience.
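The timer technique can be sketched roughly like this (`processArray` is modeled on the pattern the book describes; the 50ms and 25ms figures are illustrative): work proceeds in short bursts, yielding the thread between them.

```javascript
// Process a long array in timed chunks: run for at most ~50ms, then
// hand the thread back via setTimeout so UI events can be handled.
function processArray(items, process, done) {
  var queue = items.slice(); // leave the caller's array untouched
  setTimeout(function step() {
    var start = Date.now();
    while (queue.length && Date.now() - start < 50) {
      process(queue.shift());
    }
    if (queue.length) {
      setTimeout(step, 25); // yield, then continue with the remainder
    } else if (done) {
      done();
    }
  }, 25);
}
```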
web worker brings multithreading: long-running, expensive js tasks can be moved into a web worker without blocking the browser's UI thread. Note, however, that a web worker cannot use browser resources such as the dom, so it cannot perform dom operations.
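A rough sketch of that division of labor (`startHeavyJob` is my own helper; the worker constructor is injected so the flow can be shown without a browser, and 'worker.js' is a placeholder file name):

```javascript
// Hand a payload to a worker and receive the result as a message.
// The worker runs off the UI thread and has no access to the dom.
function startHeavyJob(WorkerCtor, payload, onResult) {
  var worker = new WorkerCtor('worker.js'); // placeholder script name
  worker.onmessage = function (e) { onResult(e.data); };
  worker.postMessage(payload); // data is copied across, not shared
  return worker;
}
```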
ajax is mainstream technology by now, so there is no need to introduce it here. Most of the book's performance advice in this area centers on the browser's resource cache: using the browser's caching mechanism effectively can greatly reduce round trips to the server and improve performance. One thing the book does not cover is the now-popular fetch API, whose performance characteristics are also worth our study.
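As a small sketch of leaning on the browser cache with the fetch API (`getJSON` is a hypothetical helper; the fetch function is passed in so the flow can be exercised with a stub): the `cache` option lets a request participate in normal HTTP caching.

```javascript
// Request JSON while letting the standard HTTP cache apply
// ('default' honors Cache-Control / ETag headers as usual).
function getJSON(fetchFn, url) {
  return fetchFn(url, { cache: 'default' }).then(function (res) {
    return res.json();
  });
}
```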
The rest is all about programming practices, code optimization, etc.
In today's front-end development, production code is routinely merged and minified, and servers enable gzip and similar measures. As http2 spreads, page performance will improve further, and the traditional practice of "file merging" may gradually be abandoned; http2 server push can also greatly speed up page loading. I cover this in detail in my other article, In-depth Study: What Is the Real Performance of HTTP2 — interested readers may want to take a look.