Graphic engine "Hummingbird".
An attempt to create a graphics engine and to integrate some existing developments into it.
Initially, when the first hack was devised, one that allows quickly discarding half of the shadow polygons based on the camera direction, it was already clear that map-1 would be distorted: it would have only parallels and no meridians, since it is responsible only for directions (landmarks). This 3D map has only two axes, since they are responsible for the camera's orientation; rotation of the camera around its own Y axis is not interesting here, because the task is a gross, rough discard of half of the possible shadow polygons by map lookup, not by calculation.
It is possible that the camera's roll around its own axis will be considered later, but not at this stage: certainly not in the algorithm for the gross selection of shadow polygons by the camera's direction, and not at the stage where this particular map is created.
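For reference, below is a minimal sketch of the standard per-polygon dot-product back-face test that this map is meant to replace with a coarse lookup; `Vec3` and `isShadowPolygon` are illustrative names, not part of the engine.

```cpp
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A polygon is a shadow (back-facing) candidate when its normal points away
// from the camera, i.e. the angle between the normal and the view direction
// is under 90 degrees, making the dot product positive.
bool isShadowPolygon(const Vec3& normal, const Vec3& viewDir) {
    return dot(normal, viewDir) > 0.0f;
}
```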
An ordinary icosahedron is taken as an example.
Since this is a hack "from the inside", all polygon normals can be turned inward for convenience (in the image these are the cones directed inward).
The result looks something like a globe of the starry sky.
The camera presumably looks straight up. Since the normals are inverted, the light cones mark the face normals of the mesh that could be visible, while those below the red surface are the dark ones, lying in the shadow of the possible camera view.
It is worth explaining that on this map the coordinates of objects do not matter: the normals of all objects start at a single point.
Next, the following possible grid is considered: the camera's view should cover all the points marked on it whose face or vertex normals are turned away from the camera by an angle of no more than 90 degrees.
Rotation around the Z axis: left-right
Rotation around the X axis: up-down
It turns out that the X axis must be the slave axis when the camera is rotated around the Z axis.
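A minimal sketch of that rotation order, assuming yaw around the world Z axis and pitch around the local (slave) X axis, with the camera looking along +Y at zero angles; the function name and conventions are illustrative.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Yaw around the world Z axis (left-right), then pitch around the already
// rotated X axis (up-down): X is the slave axis here, because its rotation
// plane is carried along by the preceding Z rotation.
Vec3 cameraForward(float yawZ, float pitchX) {
    float cy = std::cos(yawZ),   sy = std::sin(yawZ);
    float cp = std::cos(pitchX), sp = std::sin(pitchX);
    // Pitch first in the local frame, then turn the result around world Z.
    Vec3 local{0.0f, cp, sp};                           // tilted up-down
    return Vec3{-sy * local.y, cy * local.y, local.z};  // turned left-right
}
```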
It is clear that if the map is unfolded into a plane, then rotation around the X axis will produce jumps and distortions of the camera's capture area. This can even be seen in the last image if you mentally rotate the plane around the X axis (the red horizontal parallels). That is the topic for tomorrow evening's work; the engine is being made in free personal time.
Then the entire surface of the sphere, together with the normal points, is transferred onto the surface of a cylinder, giving a rough semblance of a map.
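One plausible reading of that transfer, sketched below: the azimuth of a unit normal becomes the horizontal map coordinate and its z component the vertical one. The names are illustrative, and the distortion toward the poles is exactly the kind noted earlier.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct CylCoord { float azimuth; float height; };

// Transfers a unit normal from the sphere onto the cylinder wrapped around
// it: azimuth becomes the horizontal map coordinate, z the vertical one.
CylCoord toCylinderMap(const Vec3& unitNormal) {
    CylCoord c;
    c.azimuth = std::atan2(unitNormal.y, unitNormal.x); // [-pi, pi] around the axis
    c.height  = unitNormal.z;                           // [-1, 1] along the axis
    return c;
}
```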
The algorithm for selecting regions from the cylindrical map is rather complicated, but given the modest scale of the problem being solved this is acceptable, and the map and its algorithms can be used in a number of tasks. Here the camera conventionally looks up and to the left; the cross-section of the cylinder is a complex figure, and if the camera is tilted down, the bends of the section should curve in the opposite directions.
Now we can move on to the subject of isometry and to writing the formulas.
The image was correct, just with a slight flaw.
The formulas are built as follows: the map is the surface of a cylinder whose height conventionally equals the height of the sphere, and the limiters are disks whose positions along the Z axis coincide exactly with the upper and lower points where the plane of the camera's "matrix" intersects the sphere.
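A minimal sketch under the assumption that the "matrix" plane passes through the sphere's center, so the intersection is a great circle; its highest and lowest points then sit at z = ±R·sqrt(1 − n_z²), where n_z is the z component of the unit view direction. The function name is hypothetical.

```cpp
#include <cmath>

// Z positions of the two limiter disks: the extreme points of the circle
// where the camera's "matrix" plane cuts the sphere. On the great circle
// perpendicular to the view direction, |z| peaks at R * sqrt(1 - n_z^2).
void limiterDisks(float sphereRadius, float viewDirZ,
                  float& zTop, float& zBottom) {
    float extent = sphereRadius * std::sqrt(1.0f - viewDirZ * viewDirZ);
    zTop = extent;
    zBottom = -extent;
}
```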
How can a program, with its limitations, quickly traverse the map? There are many options; I made two, each based on two arrays where every cell, in addition to its data field, stores either the number of steps to it or the address of the next cell.
In fact, the proper name for this method is: trace along a transition line, with an arbitrary entry point.
A non-technical name of my own (to go alongside the technical one): the "Stalker" trace.
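A sketch of the address-based variant of that structure, assuming a -1 sentinel ends the transition line; field and function names are illustrative, and the step-count variant would advance by a stored step count instead of jumping to a stored index.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A cell of the "Stalker" trace: besides its payload it stores the index of
// the next cell, so the map can be walked as a transition line starting
// from an arbitrary entry point.
struct TraceCell {
    std::uint32_t payload; // the data field (e.g. a packed normal id)
    std::int32_t  next;    // index of the next cell, -1 terminates the line
};

// Walks the transition line from `entry`, visiting at most `limit` cells.
std::vector<std::uint32_t> walkTrace(const std::vector<TraceCell>& cells,
                                     std::int32_t entry, std::size_t limit) {
    std::vector<std::uint32_t> visited;
    for (std::int32_t i = entry; i >= 0 && visited.size() < limit;
         i = cells[i].next)
        visited.push_back(cells[i].payload);
    return visited;
}
```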
After this digression about tracing for working with the map, it is worth returning to the map itself. The surface of the cylinder is the future polygon-normal map, projected from the upper and lower "poles" of the sphere, and perhaps it will also serve as a vertex map, however many of these maps there turn out to be.
The red disc is the roughly assumed surface for projecting the image onto the camera. The lower and upper green discs are the limiters that bound the view on the projected "cylindrical" map.
If we take a single direction (landmark), it is obvious that its shadow polygons are those whose normals are turned away from it by an angle in the range of 90 to 270 degrees.
But the camera has a field of view, so these normals cannot span the full 180 degrees (90 + 180 = 270); they can only be turned through an angle equal to the difference between 180 degrees and the camera's angle of view: 180 - 63 = 117. So the rough preliminary division of polygons into visible and invisible turns out to discard not all that much.
Naturally, the same rotation can already be carried out at an angle equal to the difference between 180 degrees and the angular size of the object. Since this is a rough preliminary division of polygons into visible and invisible, the vector from the center of the object to the camera is taken as the camera's zero reference direction, provided that all front and shadow surfaces of the polygons were inverted beforehand (their normals rotated by 180 degrees).
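A trivial helper expressing the arithmetic above; the name is hypothetical, and the same formula accepts either the camera's field of view (180 - 63 = 117, as in the text) or an object's angular size.

```cpp
// Width in degrees of the wedge of normal directions that can still be
// safely written off as shadow during the rough pass, narrowed from the
// full 180 degrees by the camera FOV or by the object's angular size.
float shadowThresholdDeg(float fovOrAngularSizeDeg) {
    return 180.0f - fovOrAngularSizeDeg;
}
```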
Here the algorithm for the rough separation of vertices is the same as at the beginning: the polygon normals are inverted inward, and to capture the visible polygon normals you need to look at the object from the inside, but in the direction of the camera.
Approximately the same method, with a little adaptation, is applicable to ray tracing. The difference is that for a ray reflected from a scene object into the camera, its incident ray is what gets calculated. Otherwise it is the same thing, but several times faster than raycasting.
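A minimal sketch of that pre-filter as I read it: a surface point can only reflect a ray into the camera if its normal faces back toward the camera, so polygons failing this dot-product test are skipped before any incident ray is computed. Names are illustrative.

```cpp
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Rough pre-filter for ray tracing, analogous to the shadow-polygon
// discard: `toCamera` is the direction from the surface point toward the
// camera; a negative dot product means the surface faces away entirely.
bool canReflectTowardCamera(const Vec3& normal, const Vec3& toCamera) {
    return dot(normal, toCamera) > 0.0f;
}
```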
The map can be refined for more detailed filtering in raycasting and ray tracing by adding a companion map to it, along with the concept (for working with that map) of a normal radius.
The positions of the vertices at draw time will be taken from an additional map of vertex angles; each vertex will simply have its own landing depth. I cannot think of anything better than this map and the tracing over it. The trace will simply be read in both directions from the center of the viewing angle, as sketched below.
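A sketch of that bidirectional read, assuming each cell carries `prev`/`next` links with -1 sentinels; cells on both sides of the entry are consumed alternately until both sides run out. The structure and names are illustrative.

```cpp
#include <vector>

// Reading the trace in both directions from the center of the viewing
// angle: the left cursor starts at the entry cell, the right cursor at its
// successor, and both walk outward until their links end.
struct BiCell { int payload; int prev; int next; };

std::vector<int> readFromCenter(const std::vector<BiCell>& cells, int center) {
    std::vector<int> out;
    int left = center, right = cells[center].next;
    while (left >= 0 || right >= 0) {
        if (left >= 0)  { out.push_back(cells[left].payload);  left  = cells[left].prev;  }
        if (right >= 0) { out.push_back(cells[right].payload); right = cells[right].next; }
    }
    return out;
}
```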
I intend to apply this kind of map and this trace both to the virtual space and to the camera.
For vertices, the routing will probably be complicated by links along the diagonals of the map, and the links will become an array of links per vertex.
This tracing can be complicated as much as you like by increasing the amount of data processed with it.