After reading this article, you can also implement a 360-degree panorama plug-in


Starting from the fundamentals of graphics rendering, this article explains in detail how to use Three.js to develop a full-featured panorama plug-in.

Let’s look at the plug-in effect first:

If you are already familiar with Three.js, or you want to skip the basic theory, you can jump straight to the Panoramic Preview section.

GitHub address for this project: …

I. Clarifying the Relationships

1.1 OpenGL

OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics.

The interface consists of nearly 350 different function calls, used to draw everything from simple graphics primitives to complex three-dimensional scenes.

OpenGL ES is a subset of the OpenGL 3D graphics API, designed for embedded devices such as mobile phones and PDAs.

Development based on OpenGL is usually done in C or C++, which is not very friendly to front-end developers.

1.2 WebGL

WebGL combines JavaScript and OpenGL ES 2.0, giving front-end developers the ability to write 3D effects in JavaScript.

WebGL provides hardware-accelerated 3D rendering for the HTML5 Canvas, so web developers can use the system's graphics card to display 3D scenes and models more smoothly in the browser, and to create complex navigation and data visualizations.

1.3 Canvas

Canvas is a rectangular area whose size can be set freely; through JavaScript you can operate on this area, drawing graphics, text, and so on.

In everyday use, Canvas is mostly drawn on through its 2d context; this is its built-in capability.

WebGL, by contrast, is three-dimensional and can draw 3D graphics. To render in a browser, WebGL needs a carrier, and that carrier is Canvas. Unlike the 2d context above, you can also get a webgl context from Canvas.

1.4 Three.js

Taking the name literally: Three stands for 3D, and js stands for JavaScript, i.e. developing 3D effects with JavaScript.

Three.js uses JavaScript to encapsulate and simplify the WebGL interface, forming an easy-to-use 3D library.

Developing directly with WebGL is relatively costly for developers; it requires a fair amount of computer graphics knowledge.

Three.js simplifies, to some extent, some hard-to-understand concepts and verbose APIs, greatly reducing the cost of learning and developing three-dimensional effects.

Let's take a look at the essential knowledge for using Three.js.

II. Basic Knowledge of Three.js

Drawing a three-dimensional effect with Three.js requires at least the following steps:

  • Create a scene (Scene) to contain the three-dimensional world
  • Add the elements to be drawn to the scene, setting their shape, material, shadows, etc.
  • Specify the position and angle from which the scene is observed, controlled with a camera object (Camera)
  • Render everything with a renderer (Renderer), and display the result in the browser

To use a movie analogy: the scene corresponds to the entire set, the camera is the shooting lens, and the renderer converts the shot scene into film.

2.1 Scene

The scene determines what Three.js renders and where it is rendered.

We place objects, lights and cameras in the scene.

It is very simple to create: just instantiate Scene.

_scene = new THREE.Scene();

2.2 Elements

With the scene in place, we next need something to show in it.

A complex three-dimensional scene is often built up from many elements, which may be custom geometries (Geometry) or complex models imported from outside.

Three.js provides us with many geometries, for example SphereGeometry (sphere), TetrahedronGeometry (tetrahedron), TorusGeometry (torus), and so on.

In Three.js, the material (Material) determines how a geometry is displayed. It covers the attributes beyond shape, such as color, texture, and transparency. Material and Geometry are complementary and must be used together.

The following code creates a cuboid and gives it a basic mesh material (MeshBasicMaterial):

var geometry = new THREE.BoxGeometry(200, 100, 100);
var material = new THREE.MeshBasicMaterial({ color: 0x645d50 });
var mesh = new THREE.Mesh(geometry, material);
_scene.add(mesh); // don't forget to add the mesh to the scene

Being able to see the geometry from this angle is actually thanks to the camera, which we will cover in a later section. We can now see the outline of the geometry, but it looks odd, not quite like a solid body. In fact, we need to add light and shadow to make it look more real.

The basic mesh material (MeshBasicMaterial) is not affected by light and produces no shading. Let's switch the geometry to a material that responds to light, the mesh standard material (MeshStandardMaterial), and add some lights:

var geometry = new THREE.BoxGeometry(200, 100, 100);
var material = new THREE.MeshStandardMaterial({ color: 0x645d50 });
var mesh = new THREE.Mesh(geometry, material);
// Create a directional light to illuminate the geometry
var directionalLight = new THREE.DirectionalLight(0xffffff, 1);
directionalLight.position.set(-4, 8, 12);
// Create an ambient light
var ambientLight = new THREE.AmbientLight(0xffffff);
_scene.add(directionalLight);
_scene.add(ambientLight);

With lighting, the geometry shows a more convincing 3D effect. Three.js offers many kinds of light sources; here we used ambient light (AmbientLight) and directional light (DirectionalLight).

Ambient light applies its color to every object in the scene uniformly.

Directional light can be thought of as light arriving from far away, like sunlight. It has a direction and can also trigger objects' light-reflection effects.

Besides these two, Three.js provides several other light sources, suited to rendering different materials under different conditions; choose according to the actual situation.

2.3 Coordinate System

Before talking about cameras, let's first understand the concept of a coordinate system:

In the three-dimensional world, coordinates define the position of an element in the three-dimensional space, and the origin of the coordinate system is the datum point of the coordinate.

Most commonly, we use three distances from the origin (along the x-axis, y-axis, and z-axis) to define a position; this is the Cartesian (rectangular) coordinate system.

To determine the handedness of a coordinate system, we usually use the thumb, index finger, and middle finger, held at 90 degrees to each other. The thumb represents the x-axis, the index finger the y-axis, and the middle finger the z-axis.

This results in two coordinate systems: the left-hand coordinate system and the right-hand coordinate system.

The coordinate system used in Three.js is the right-handed one.
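One way to convince yourself of the handedness: in a right-handed system, the cross product of the x and y unit vectors points along the positive z-axis. A tiny sketch in plain JavaScript (an illustration only, not part of the plug-in):

```javascript
// Cross product of two 3D vectors, given as [x, y, z] arrays.
function cross(a, b) {
    return [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    ];
}

// In a right-handed system, x-hat cross y-hat points along +z.
var zAxis = cross([1, 0, 0], [0, 1, 0]); // → [0, 0, 1]
```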

We can add a coordinate system to our scene so that we can clearly see where the elements are:

var axisHelper = new THREE.AxisHelper(600);
_scene.add(axisHelper);

Red represents the x-axis, green the y-axis, and blue the z-axis.

2.4 Camera

If you don't create a camera (Camera), you can't see anything, because the default observation point is at the origin, which is inside the geometry.

Camera (Camera) specifies where we observe this three-dimensional scene and at what angle.

2.4.1 Difference between Two Cameras

Three.js currently provides several different cameras. The most commonly used, and the two used in the plug-in below, are PerspectiveCamera (perspective camera) and OrthographicCamera (orthographic projection camera).

The above figure clearly explains the difference between the two cameras:

On the right is the OrthographicCamera (orthographic projection camera). It has no perspective effect; that is, object size is unaffected by distance. This corresponds to orthographic projection, which is what most geometry drawings in math textbooks use.

On the left is the PerspectiveCamera (perspective camera), which matches our normal vision: near objects look large and far objects look small. This corresponds to perspective projection.

If you want the scene to look more real and stereoscopic, the perspective camera is most suitable; if some elements in the scene should not scale as the distance changes, the orthographic camera is the better choice.

2.4.2 Constructor Parameters

Let's look at the constructor parameters of each camera in turn:

_camera = new OrthographicCamera(left, right, top, bottom, near, far);

OrthographicCamera receives six parameters: left, right, top, bottom correspond to the distances to the left, right, top, and bottom boundaries of the view, and near, far to the nearest and farthest visible distances. Elements beyond these distances will not appear in the field of view and will not be drawn by the browser. These six distances in fact form a cuboid, so the visible range of the OrthographicCamera always lies inside this cuboid.
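The containment idea can be sketched in plain JavaScript (an illustration only, not Three.js's internal test; it assumes the camera sits at the origin looking down the negative z-axis):

```javascript
// Does point p fall inside the box formed by the six OrthographicCamera distances?
function inOrthoBox(p, left, right, top, bottom, near, far) {
    return p.x >= left && p.x <= right &&
           p.y >= bottom && p.y <= top &&
           -p.z >= near && -p.z <= far; // the camera looks along -z
}

var inside = inOrthoBox({ x: 0, y: 0, z: -5 }, -10, 10, 10, -10, 0.1, 100);   // true
var outside = inOrthoBox({ x: 20, y: 0, z: -5 }, -10, 10, 10, -10, 0.1, 100); // false
```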

_camera = new PerspectiveCamera(fov, aspect, near, far);

PerspectiveCamera receives four parameters. near and far, as above, correspond to the nearest and farthest distances the camera can observe; fov is the vertical field-of-view angle (the larger fov is, the wider the vertical range observed); aspect is the ratio of the horizontal extent of the view to the vertical one, so fov and aspect together determine the horizontal range as well.
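As a quick sanity check of how fov and aspect determine the view: the visible height at distance d from the camera is 2 · d · tan(fov/2), and the visible width is that height times aspect. A small sketch (plain JavaScript, illustration only):

```javascript
// Visible region of a perspective camera at a given distance in front of it.
function visibleSize(fovDeg, aspect, distance) {
    var height = 2 * distance * Math.tan((fovDeg / 2) * Math.PI / 180);
    return { width: height * aspect, height: height };
}

// With fov = 90°, the visible height equals twice the distance.
var size = visibleSize(90, 2, 100); // height ≈ 200, width ≈ 400
```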

2.4.3 position and lookAt

There are two more things about cameras that must be known: the position property and the lookAt function.

The position property specifies where the camera is located.

The lookAt function specifies the direction in which the camera looks.

In fact, both the value of position and the parameter received by lookAt are objects of type Vector3. This object represents a coordinate in three-dimensional space and has three properties: x, y, and z, the distances along the x-, y-, and z-axes respectively.

Next, let's point the camera at the origin and, in turn, set each of x, y, z to 0 while keeping the other two non-zero, to see what happens to the field of view:

_camera = new OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 0.1, 1000);
_camera.lookAt(new THREE.Vector3(0, 0, 0));

_camera.position.set(0, 300, 600);  // 1: x is 0

_camera.position.set(500, 0, 600);  // 2: y is 0

_camera.position.set(500, 300, 0);  // 3: z is 0

We can clearly see that position determines the starting point of our view, while the direction the lens points stays the same.

Next we keep position fixed and change the camera's viewing direction:

_camera = new OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 0.1, 1000);
_camera.position.set(500, 300, 600);
_camera.lookAt(new THREE.Vector3(0, 0, 0));   // 1: view points at the origin
_camera.lookAt(new THREE.Vector3(200, 0, 0)); // 2: view biased toward the x-axis

As you can see, the starting point of the view is the same, but the viewing direction has changed.

2.4.4 Comparison of Two Cameras

OK, with the above foundation, let's write two more examples to compare the views of the two cameras. For ease of viewing, we create two geometries at different positions:

var geometry = new THREE.BoxGeometry(200, 100, 100);
var material = new THREE.MeshStandardMaterial({ color: 0x645d50 });
var mesh = new THREE.Mesh(geometry, material);

var geometry = new THREE.SphereGeometry(50, 100, 100);
var ball = new THREE.Mesh(geometry, material);
ball.position.set(200, 0, -200);

Orthographic projection camera field of view:

_camera = new OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 0.1, 1000);
_camera.position.set(0, 300, 600);
_camera.lookAt(new THREE.Vector3(0, 0, 0));

Perspective camera field of view:

_camera = new PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
_camera.position.set(0, 300, 600);
_camera.lookAt(new THREE.Vector3(0, 0, 0));

As you can see, this confirms the theory of the two cameras described above.

2.5 Renderer

Above we created the scene, elements, and camera; now we tell the browser to render them.

Three.js also provides several different renderers; here we mainly look at the WebGL renderer (WebGLRenderer). As the name implies, it uses WebGL to draw the scene and can use GPU hardware acceleration to improve rendering performance.

_renderer = new THREE.WebGLRenderer();

You need to add the elements drawn by Three.js to the browser. This process requires a carrier, the Canvas described above; you can get this Canvas via _renderer.domElement and append it to a real DOM node.

_container = document.getElementById('container');
_container.appendChild(_renderer.domElement);

Use the setSize function to set the size you want to render; in effect it changes the size of the Canvas above:

_renderer.setSize(window.innerWidth, window.innerHeight);

Now that you have specified the rendering carrier and its size, you can use the render function to render the scene and camera defined above:

_renderer.render(_scene, _camera);

In fact, if you execute the above code in sequence, the screen may still be black, with no elements rendered.

This is because the elements you want to render may not have finished loading, and you rendered only once. We need a way to render the scene and camera continuously, which is the following method:

2.6 requestAnimationFrame

window.requestAnimationFrame() tells the browser that you want to run an animation and asks it to call a specified callback function to update the animation before the next repaint.

This method takes a callback function as its parameter; the callback is executed before the browser's next repaint.


If you want to keep updating the animation on subsequent frames, the callback itself must call window.requestAnimationFrame() again.

In other words, by having the callback continuously execute the rendering operation via requestAnimationFrame, the browser always knows what needs to be rendered in real time.
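The loop pattern can be seen in isolation. Since requestAnimationFrame is a browser API, the sketch below stands in a manual frame queue for it (an illustration only, not the plug-in's code):

```javascript
// Stand-in scheduler: collects callbacks the way requestAnimationFrame does.
var queue = [];
function fakeRaf(cb) { queue.push(cb); return queue.length; }

var rendered = 0;
function animate() {
    fakeRaf(animate); // re-register for the next frame, like requestAnimationFrame
    rendered++;       // stands in for _renderer.render(_scene, _camera)
}

animate(); // kick off the loop
// The browser would drain the queue once per repaint; we simulate 5 repaints.
for (var i = 0; i < 5; i++) {
    queue.shift()();
}
// rendered is now 6: the initial call plus five simulated frames
```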

Of course, when you no longer need to draw in real time, you can use cancelAnimationFrame to stop the drawing immediately.


Let’s look at a simple example:

var i = 0;
var animateName;
function animate() {
    animateName = requestAnimationFrame(animate);
    if (i > 100) {
        // stop drawing after 100 frames
        cancelAnimationFrame(animateName);
    }
    i++;
}
animate();
Let’s look at the implementation effect:

Combining requestAnimationFrame with the Three.js renderer lets us draw the 3D animation in real time:

function animate() {
    requestAnimationFrame(animate);
    _renderer.render(_scene, _camera);
}
animate();

With the above code, we can simply achieve some animation effects:

var y = 100;
var option = 'down';
function animateIn() {
    animateName = requestAnimationFrame(animateIn);
    mesh.rotateX(Math.PI / 40);
    if (option == 'up') {
        ball.position.set(200, y += 8, 0);
    } else {
        ball.position.set(200, y -= 8, 0);
    }
    if (y < 1) { option = 'up'; }
    if (y > 100) { option = 'down'; }
}

2.7 Summary

The knowledge above is the most basic part of Three.js, and also the most important backbone.

With this knowledge, you will have some ideas when you see a complex 3D effect. Of course, realizing one takes many details; you can consult the official documents for more information.

The following chapters put Three.js into practice: implementing a 360-degree panorama plug-in.

This plug-in consists of two parts. The first is previewing the panorama.

The second is configuring markers on the panorama and associating them with the preview coordinates.

Let’s first look at the panorama preview section:

III. Panoramic Preview

3.1 Basic Logic

  • Wrap a panorama image around the inner wall of a sphere.
  • Set an observation point at the center of the sphere.
  • Drag with the mouse to rotate the sphere and change the view of the panorama.
  • Scroll the mouse wheel to zoom in and out, changing the viewing distance of the panorama.
  • Mount markers (text, icons, etc.) on the panorama according to coordinates, and add events to them, such as click events.

3.2 Initialization

Let’s build the necessary infrastructure first:

Scene, camera (choose a perspective camera, which makes the panorama look more realistic), and renderer:

_scene = new THREE.Scene();

// Initialize the camera
function initCamera() {
    _camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
    _camera.position.set(0, 0, 2000);
    _camera.lookAt(new THREE.Vector3(0, 0, 0));
}

// Initialize the renderer
function initRenderer() {
    _renderer = new THREE.WebGLRenderer();
    _renderer.setSize(window.innerWidth, window.innerHeight);
    _container = document.getElementById('panoramaConianer');
    _container.appendChild(_renderer.domElement);
}

// Render in real time
function animate() {
    requestAnimationFrame(animate);
    _renderer.render(_scene, _camera);
}

Let’s add a sphere to the scene and wrap the panorama on the sphere as material:

var mesh = new THREE.Mesh(
    new THREE.SphereGeometry(1000, 100, 100),
    new THREE.MeshBasicMaterial({ map: ImageUtils.loadTexture('img/p3.png') })
);
_scene.add(mesh);

Then the scene we saw should look like this:

This is not the effect we want. What we want is to view the panorama from inside the sphere, with the image attached to the inner wall of the sphere, not spread over the outside:

We just need to set one component of the mesh's scale to a negative value, and the material will be attached to the inside of the geometry:

mesh.scale.x = -1;

Then we move the camera’s center point to the center of the ball:

_camera.position.set(0, 0, 0);

Now we are inside the panorama ball:

3.3 Event Handling

The panorama can now be browsed, but you can only see what is in front of you; you cannot drag to see other parts. To precisely control dragging, zooming, and other behaviors, we add some events manually:

Listen for the mouse mousedown event. At this point, set the drag flag _isUserInteracting to true, and record the starting screen coordinates and the starting lookAt coordinates of the camera.

_container.addEventListener('mousedown', (event) => {
    _isUserInteracting = true;
    _onPointerDownPointerX = event.clientX;
    _onPointerDownPointerY = event.clientY;
    _onPointerDownLon = _lon;
    _onPointerDownLat = _lat;
});

Listen for the mouse mousemove event. While _isUserInteracting is true, compute in real time the coordinates the camera should look at.

_container.addEventListener('mousemove', (event) => {
    if (_isUserInteracting) {
        _lon = (_onPointerDownPointerX - event.clientX) * 0.1 + _onPointerDownLon;
        _lat = (event.clientY - _onPointerDownPointerY) * 0.1 + _onPointerDownLat;
    }
});

Listen for the mouse mouseup event, and set _isUserInteracting back to false.

_container.addEventListener('mouseup', (event) => {
    _isUserInteracting = false;
});

Of course, so far we have only changed the coordinates; we haven't told the camera about the change. We do that in the animate function:

function animate() {
    requestAnimationFrame(animate);
    calPosition();
    _renderer.render(_scene, _camera);
    _renderer.render(_sceneOrtho, _cameraOrtho);
}

function calPosition() {
    _lat = Math.max(-85, Math.min(85, _lat));
    var phi = tMath.degToRad(90 - _lat);
    var theta = tMath.degToRad(_lon);
    // Map the "lon/lat" onto a point on the sphere for the camera to look at.
    // _camera.target is assumed to be a THREE.Vector3 created during initialization.
    _camera.target.x = _pRadius * Math.sin(phi) * Math.cos(theta);
    _camera.target.y = _pRadius * Math.cos(phi);
    _camera.target.z = _pRadius * Math.sin(phi) * Math.sin(theta);
    _camera.lookAt(_camera.target);
}

Listen for the mousewheel event to zoom the panorama in and out. Note that a maximum zoom range maxFocalLength and a minimum zoom range minFocalLength are specified here.

_container.addEventListener('mousewheel', (event) => {
    var ev = event || window.event;
    var down = true;
    var m = _camera.getFocalLength();
    down = ev.wheelDelta ? ev.wheelDelta < 0 : ev.detail > 0;
    if (down) {
        if (m > minFocalLength) {
            m -= m * 0.05;
            _camera.setFocalLength(m);
        }
    } else {
        if (m < maxFocalLength) {
            m += m * 0.05;
            _camera.setFocalLength(m);
        }
    }
});

Let’s take a look at the effect:

3.4 Adding Markers

When browsing a panorama, we often need to mark some special positions, and these markers may carry events; for example, clicking a marker may take you to the next panorama.

Let’s look at how to add tags to the panorama and how to add events to these tags.

We may not want these markers to grow or shrink as the field of view changes. For this reason, we use an orthographic projection camera to display the markers, giving it only a fixed observation depth:

_cameraOrtho = new THREE.OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 1, 10);
_cameraOrtho.position.z = 10;
_sceneOrtho = new Scene();

We use the sprite material (SpriteMaterial) to implement text markers and picture markers:

// Create a text marker
function createLableSprite(name) {
    const canvas = document.createElement('canvas');
    const context = canvas.getContext('2d');
    const metrics = context.measureText(name);
    const width = metrics.width * 1.5;
    context.font = '10px SimSun';
    context.fillStyle = 'rgba(0,0,0,0.95)';
    context.fillRect(2, 2, width + 4, 20 + 4);
    context.fillText(name, 4, 20);
    const texture = new Texture(canvas);
    const spriteMaterial = new SpriteMaterial({ map: texture });
    const sprite = new Sprite(spriteMaterial); = name;
    const lable = {
        name: name,
        canvas: canvas,
        context: context,
        texture: texture,
        sprite: sprite
    };
    return lable;
}

// Create a picture marker
function createSprite(position, url, name) {
    const textureLoader = new TextureLoader();
    const ballMaterial = new SpriteMaterial({
        map: textureLoader.load(url)
    });
    const sp = {
        pos: position,
        name: name,
        sprite: new Sprite(ballMaterial)
    };
    sp.sprite.scale.set(32, 32, 1.0); = name;
    return sp;
}

After creating these markers, we render them into the scene.

We must tell the scene where these markers are located. To make this intuitive, we give the markers a kind of coordinate very similar to longitude and latitude, which we call lon and lat. How these coordinates are produced is described in detail in the chapter on panoramic marking below.

In this process, a total of two coordinate transformations have taken place:

First conversion: converting the "latitude and longitude" into three-dimensional space coordinates, i.e. the x, y, z form mentioned above.

We use the geoPosition2World function to obtain a Vector3 object. Passing the current camera _camera as a parameter to this object's project method yields a normalized coordinate, which lets us judge whether the marker is within the field of view. As in the code below: if each component of the normalized coordinate lies within -1 to 1, the marker appears in our field of view, and we render it at its exact position.

Second conversion: convert the three-dimensional space coordinates into screen coordinates.

If we applied the three-dimensional coordinates above directly to the marker, we would find that no matter how the view moves, the marker's position never changes, because the computed coordinate is a constant.

Therefore, we need to use the normalized coordinate above to convert the marker's three-dimensional space coordinate into a real screen coordinate; this is what the worldPostion2Screen function does.

If you are interested in the implementations of the geoPosition2World and worldPostion2Screen functions, you can look at the source code on my github; I won't explain them further here, as they involve a lot of specialized knowledge.

var wp = geoPosition2World(_sprites[i].lon, _sprites[i].lat);
var sp = worldPostion2Screen(wp, _camera);
var test = wp.clone();
test.project(_camera); // normalized: every component inside [-1, 1] means visible
if (test.x > -1 && test.x < 1 && test.y > -1 && test.y < 1 && test.z > -1 && test.z < 1) {
    _sprites[i].sprite.scale.set(32, 32, 32);
    _sprites[i].sprite.position.set(sp.x, sp.y, 1);
} else {
    _sprites[i].sprite.scale.set(1.0, 1.0, 1.0);
    _sprites[i].sprite.position.set(0, 0, 0);
}
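For intuition, here is a minimal sketch of what the two conversions might look like. This is my own simplified reconstruction, not the plug-in's actual source: geoPosition2World is the standard spherical-to-Cartesian mapping, and the screen step (here called ndc2Screen to avoid confusion with the real worldPostion2Screen) maps a normalized device coordinate, as obtained from Vector3.project(camera), to pixels:

```javascript
// "Latitude/longitude" on the panorama sphere -> 3D world coordinates.
function geoPosition2World(lon, lat, radius) {
    var phi = (90 - lat) * Math.PI / 180; // polar angle
    var theta = lon * Math.PI / 180;      // azimuthal angle
    return {
        x: radius * Math.sin(phi) * Math.cos(theta),
        y: radius * Math.cos(phi),
        z: radius * Math.sin(phi) * Math.sin(theta)
    };
}

// Normalized device coordinate (components in [-1, 1]) -> screen pixels,
// with the origin at the center of the viewport.
function ndc2Screen(ndc, width, height) {
    return { x: ndc.x * width / 2, y: ndc.y * height / 2 };
}

var p = geoPosition2World(0, 0, 1000); // a point on the "equator": x = 1000, y ≈ 0, z = 0
```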

Now that the tag has been added to the panorama, let’s add a click event to it:

Three.js does not provide a dedicated way to add events to a Sprite, but we can achieve it with a raycaster (Raycaster).

Raycaster provides mouse-picking capability:

Via the setFromCamera function, establish the binding between the (normalized) click coordinates and the camera.

Via intersectObjects, determine which objects in a group have been hit (clicked), obtaining an array of hit objects.

In this way, we can get the clicked object and do some processing based on it:

_container.addEventListener('click', (event) => {
    _mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    _mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
    _raycaster.setFromCamera(_mouse, _cameraOrtho);
    var intersects = _raycaster.intersectObjects(_clickableObjects);
    intersects.forEach(function (element) {
        alert('clicked: ' +;
    });
});

Click on a marker to enter the next panorama:

IV. Panoramic Marking

To let the panorama know where I want to place a marker, I need a tool that associates a position on the original image with a position on the panorama:

Because this part of the code has little to do with Three.js, I'll only cover the basic implementation logic here. If you are interested, you can check my github repository.

4.1 Requirements

  • Establish a mapping between coordinates and the panorama, giving the panorama a set of virtual coordinates.
  • On a flattened (tiled) panorama, allow adding a marker at any position and obtaining that marker's coordinates.
  • Use those coordinates to add a marker to the preview panorama, so it appears in the same position as on the flattened panorama.

4.2 Coordinates

On a 2D plane, we can listen for mouse events on the screen, but all we can get is the current mouse coordinates. What we have to do is convert those mouse coordinates into three-dimensional space coordinates.

It seems impossible. How can two-dimensional coordinates be converted into three-dimensional coordinates?

However, we can use an intermediate coordinate to convert, which can be called “latitude and longitude”.

Before that, let's review the latitude and longitude we commonly speak of.

4.3 Longitude and Latitude

Using latitude and longitude, you can accurately locate any point on the earth. Its calculation rule is as follows:

Usually, a line connecting the south pole to the north pole is called a meridian, and its corresponding plane a meridian plane. The meridian through the original site of the Greenwich Observatory in London is defined as the 0° meridian (prime meridian), and its plane the prime meridian plane.

Longitude: the angle between the meridian plane through a point on the sphere and the prime meridian plane. East is positive and west is negative.

Latitude: the angle between the normal at a point on the sphere (the normal of the plane tangent to the sphere at that point) and the equatorial plane. North is positive and south is negative.

As a result, every point on the earth can be mapped to a longitude and latitude, and conversely, a longitude and latitude locates a unique point.

In this way, even if the spherical surface is unfolded into a plane, we can still use latitude and longitude to indicate a point's position:

4.4 Coordinate Transformation

Based on the above analysis, we can give the flat panorama a virtual "latitude and longitude". We use Canvas to draw a graticule for it:

Convert mouse coordinates to “latitude and longitude”:

function calLonLat(e) {
    // container size in pixels (e.g. "600px" -> 600)
    var h = Number('px')[0]);
    var w = Number('px')[0]);
    var ix = _setContainer.offsetLeft;
    var iy = _setContainer.offsetTop + h;
    var x = e.clientX;
    var y = e.clientY;
    var lonS = (x - ix) / w;
    var lon = 0;
    if (lonS > 0.5) {
        lon = -(1 - lonS) * 360;
    } else {
        lon = 360 * lonS;
    }
    var latS = (iy - y) / h;
    var lat = 0;
    if (latS > 0.5) {
        lat = (latS - 0.5) * 180;
    } else {
        lat = (0.5 - latS) * 180 * -1;
    }
    lon = lon.toFixed(2);
    lat = lat.toFixed(2);
    return { lon: lon, lat: lat };
}

In this way, a point on the flat image can be associated with a three-dimensional coordinate. Of course, this still requires some conversion; if you are interested, study the geoPosition2World and worldPostion2Screen functions in the source code.

V. Plug-in Package

In the above code, we have implemented the functions of panorama preview and panorama marking. Next, we will package these functions into plug-ins.

A plug-in means users can directly reference the code you wrote and achieve the desired function with only a small amount of configuration.

5.1 Panoramic Preview Package

Let’s look at which configurations can be extracted:

var options = {
    container: 'panoramaConianer',
    url: 'resources/img/panorama/pano-7.jpg',
    lables: [],
    widthSegments: 60,
    heightSegments: 40,
    pRadius: 1000,
    minFocalLength: 1,
    maxFocalLength: 100,
    sprite: 'label',
    onClick: () => { }
};
  • container: id of the DOM container
  • url: image path
  • lables: array of markers in the panorama, in the format {position:{lon:114,lat:38},logoUrl:'lableLogo.png',text:'name'}
  • widthSegments: number of horizontal segments
  • heightSegments: number of vertical segments (smaller is coarser but faster; larger is finer but slower)
  • pRadius: radius of the panorama sphere; the default value is recommended
  • minFocalLength: minimum focal length of the lens (zoom-out limit)
  • maxFocalLength: maximum focal length of the lens (zoom-in limit)
  • sprite: type of marker to display, label or icon
  • onClick: click event for markers

The items above can be configured by the user. So how should the user pass them into the plug-in?

We can declare some default configuration options in the plug-in; the user passes parameters via the constructor, and we use Object.assign to overwrite the defaults with the incoming configuration.

From then on, we can access these variables through this.def, and the rest of the code simply reads its configuration from there.

var options = {
    // default configuration
    // ...
};

function tpanorama(opt) {
    this.render(opt);
}

tpanorama.prototype = {
    constructor: tpanorama,
    def: {},
    render: function (opt) {
        this.def = Object.assign({}, options, opt);
        // initialization
        // ...
    }
};
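The merging step can be seen on its own. The option names below are made up for illustration, not the plug-in's actual option names:

```javascript
// Defaults declared by the plug-in.
var defaults = { width: 100, showGrid: true, gridColor: '#48D1CC' };
// Options passed in by the user.
var userOpt = { width: 50 };

// Later sources win; the empty target object keeps `defaults` itself untouched.
var merged = Object.assign({}, defaults, userOpt);
// merged → { width: 50, showGrid: true, gridColor: '#48D1CC' }
```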

5.2 Panoramic Marking Package

The basic logic is similar to the above, and the following are some extracted parameters.

var setOpt = {
    container: 'myDiv',                  // container for the marking tool
    imgUrl: 'resources/img/panorama/3.jpg',
    width: '',                           // specified width; height adapts automatically
    showGrid: true,                      // whether to show the grid
    showPosition: true,                  // whether to show the longitude/latitude tip
    lableColor: '#9400D3',               // marker color
    gridColor: '#48D1CC',                // grid color
    lables: [],                          // markers, e.g. {lon:114, lat:38, text:'marker one'}
    addLable: true,                      // when on, double-click adds a marker (lon/lat tip must be on)
    getLable: true,                      // when on, right-click queries a marker (lon/lat tip must be on)
    deleteLbale: true                    // when on, middle-click deletes a marker (lon/lat tip must be on)
};

VI. Release

Next, we consider how to make the written plug-in available to users.

We mainly consider two scenarios: direct reference, and npm install.

6.1 Direct JS Reference

To avoid polluting global variables, we wrap the code in a self-executing function (function(){}()) and expose the plug-in we have written on the global window object.

I put this under the originSrc directory.

(function (global, undefined) {
    function tpanorama(opt) {
        // ...
    }
    tpanorama.prototype = {
        // ...
    };

    function tpanoramaSetting(opt) {
        // ...
    }
    tpanoramaSetting.prototype = {
        // ...
    };

    global.tpanorama = tpanorama;
    global.tpanoramaSetting = tpanoramaSetting;
}(window));

6.2 Using npm install

Directly export the written plug-in:

module.exports = { tpanorama, tpanoramaSetting };

I put this under the src directory.

At the same time, point the main property in package.json at the file we want to export: "main": "lib/index.js", then fill in name, description, version, and the other information.
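Putting that together, the relevant package.json fields might look something like this (the version and description values here are illustrative assumptions, not the project's actual metadata):

```json
{
  "name": "tpanorama",
  "version": "1.0.0",
  "description": "A Three.js-based 360-degree panorama plug-in",
  "main": "lib/index.js"
}
```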

Now we can publish. First you need an npm account and to be logged in; if you don't have one, use the following command to create it:

npm adduser --registry

If you already have an account, you can log in directly using the following command.

npm login --registry

After successful login, you can publish:

npm publish --registry

Note that I manually specified the registry in each command above, because your current npm source may have been switched to the Taobao mirror or a company mirror; without specifying it manually, publishing would fail.

After publishing succeeds, you can see your package directly on the npm website.

Then users can install it with npm install tpanorama and use it:

var { tpanorama, tpanoramaSetting } = require('tpanorama');

6.3 Babel Compilation

Finally, don't forget: whichever method is used, the code must be compiled with babel before being exposed to users.

Create a build command in scripts to compile the source files; what is finally exposed to users is lib and origin.

"build": "babel src --out-dir lib && babel originSrc --out-dir origin",

You can also provide other commands for users to test with. For example, I put all the examples in examples, and define an example command in scripts:

"example": "npm run webpack && node ./server/www"

This way, after cloning the code, users can simply run npm run example locally to start debugging.

VII. Summary

GitHub address for this project: …

If there are any mistakes in the article, please point them out in the comments. If this article helped you, please like and follow.

If you want to read more quality articles, please follow my Github blog; your stars ✨, likes, and follows are the motivation for my continued writing!

I also recommend following my WeChat public account "code Secret Garden", which pushes high-quality articles every day, so we can communicate and grow together.