Physically Based Rendering (PBR) is the shading model used in most modern games. As the name suggests, it takes a physics-based approach rather than a stylised or approximated one. Most importantly, it uses two separate channels to control the specularity (shininess) of a surface, but this page will also explore related concepts and other notable texture channels, and offer some external resources for further reading.
The Metallic and Roughness/Smoothness workflow splits the concept of specularity (the shininess of a surface) into two more specific channels, allowing for a much more accurate result.
Metallic surfaces react to light very differently from non-metallic (also called dielectric) surfaces, particularly in how they reflect it.
A metallic map tends to be almost binary black and white: surfaces are generally either metal or not (with some exceptions).
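As a rough sketch of how a shader might interpret these two channels (the function names here are hypothetical, and the ~4% dielectric base reflectance is a common convention rather than a universal rule):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def surface_response(base_color, metallic):
    """Split a base colour into diffuse and specular terms.

    Dielectrics reflect a small, colourless amount of specular light
    (~4% is a common convention) and keep their colour in the diffuse
    term; metals tint their reflections and have no diffuse term.
    """
    specular_color = tuple(lerp(0.04, c, metallic) for c in base_color)
    diffuse_color = tuple(c * (1.0 - metallic) for c in base_color)
    return diffuse_color, specular_color

# The same red base colour as plastic, then as metal.
print(surface_response((0.8, 0.1, 0.1), metallic=0.0))
print(surface_response((0.8, 0.1, 0.1), metallic=1.0))
```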
Roughness maps are grayscale maps which work more like traditional specular maps, representing the shiny and matte parts of a surface: darker values are smoother and shinier, lighter values are rougher and more matte.
In some engines or tools you may encounter Smoothness or Glossiness instead, which works in reverse to Roughness to achieve the same effect.
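The relationship between the two conventions is a simple inversion, as in this minimal sketch:

```python
def roughness_to_smoothness(roughness):
    # Smoothness/glossiness is simply inverted roughness:
    # a fully rough surface (1.0) has zero smoothness.
    return 1.0 - roughness

# A matte surface in a roughness workflow...
print(roughness_to_smoothness(0.9))  # ...reads as 0.1 smoothness.
```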
A normal is the direction a surface faces, which determines the angle at which light reflects off it. Models contain normals on their faces and at their vertices (which is what allows for both hard and soft edge shading), and surface normals can also be faked using a special texture called a normal map.
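As an illustrative sketch of that hard/soft edge point (the function names are hypothetical), a soft edge can be produced by averaging the normals of the faces that meet at a vertex:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def smooth_vertex_normal(face_normals):
    """A 'soft' edge: the vertex normal averages the normals of
    every face that shares the vertex, so shading blends across
    the edge. A 'hard' edge keeps each face's own normal instead."""
    summed = [sum(axis) for axis in zip(*face_normals)]
    return normalize(summed)

# Two faces meeting at a 90-degree edge: the shared soft normal
# points along the diagonal between them.
print(smooth_vertex_normal([(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]))
```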
A normal map is an RGB texture that encodes a direction for every pixel of the surface, with the red, green, and blue channels storing the X, Y, and Z components of that direction. The shader uses this texture to fake shading detail on the surface, creating a sense of much more detail than could feasibly be modelled, with no extra tris.
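A minimal sketch of the decoding step, assuming texel channels have already been normalised to the 0-1 range as they would be when sampled from an 8-bit texture:

```python
import math

def decode_normal(texel):
    """Unpack an RGB normal map texel into a direction vector.

    Each 0..1 channel is remapped to -1..1, so the flat 'default'
    normal map colour (0.5, 0.5, 1.0) decodes to (0, 0, 1):
    pointing straight out of the surface.
    """
    x, y, z = (2.0 * c - 1.0 for c in texel)
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

def lambert(normal, light_dir):
    """Basic diffuse lighting: brightness falls off with the angle
    between the decoded normal and the light direction."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(dot, 0.0)

print(lambert(decode_normal((0.5, 0.5, 1.0)), (0.0, 0.0, 1.0)))  # 1.0: lit head-on
```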
High poly models, usually made with sculpting tools like ZBrush, can be baked down into texture channels, such as normal and ambient occlusion maps, for use on low poly versions of the same models.
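Production bakers cast rays from the low poly surface to the high poly one; as a much-simplified flavour of the idea, this hypothetical sketch derives a normal map from a high-resolution heightfield for use on a flat surface:

```python
import math

def heightfield_to_normals(height, strength=1.0):
    """Derive per-texel normals from a heightfield via finite
    differences: a much-simplified stand-in for baking detail
    from a high poly model onto a flat low poly surface."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope along X and Y, sampling neighbours (clamped at edges).
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            # Pack the -1..1 normal back into the 0..1 range a texture stores.
            row.append(tuple((c / length + 1.0) * 0.5 for c in (-dx, -dy, 1.0)))
        normals.append(row)
    return normals

# A small bump in the middle of an otherwise flat 3x3 heightfield.
bump = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
print(heightfield_to_normals(bump)[1][0])  # texel left of the bump tilts away from it
```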
Rendering is a complex topic, but approaches can broadly be put into two categories: forward and deferred rendering.
The basic concept of forward rendering is that every object is rendered against every light source, generally from front to back, and the resulting images are layered over each other. For example, to render this cube and sphere, the GPU would draw the cube with each light, then the sphere with each light, then lay those images on top of each other, something like the sequence below.
You can subvert some aspects of the process quite easily, such as asking for objects to be drawn over other objects (think an objective highlight, or seeing an enemy through a wall), because that just changes the draw order, and it handles transparency very well. However, it becomes extremely expensive as more lights affect each object, because the cost scales with the number of objects multiplied by the number of lights.
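In very rough pseudocode terms (the names are stand-ins, and real draw calls are reduced to strings), the forward cost structure looks like this:

```python
def forward_render(objects, lights):
    """Every object is shaded against every light, so the total
    work grows with len(objects) * len(lights)."""
    frame = []
    # Drawing front to back lets the depth test skip hidden pixels.
    for obj in sorted(objects, key=lambda o: o["distance"]):
        for light in lights:
            frame.append(f"draw {obj['name']} lit by {light}")
    return frame

scene = [{"name": "cube", "distance": 2.0}, {"name": "sphere", "distance": 5.0}]
for call in forward_render(scene, ["sun", "lamp", "torch"]):
    print(call)  # 2 objects x 3 lights = 6 draw passes
```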
With deferred rendering, by contrast, shading is deferred until after all the geometry has been drawn. The material channels for all objects are rendered together into a set of screen-sized buffers, lighting is then applied to those buffers, and the results are combined. Because each pixel in those buffers can only hold one surface, this removes the ability to do translucency (in the traditional sense), but each additional light creates a considerably smaller performance impact. More like the sequence below.
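The same hypothetical scene sketched as a deferred pipeline, with simple lists standing in for the buffers a real engine would render into:

```python
def deferred_render(objects, lights):
    """Geometry is rendered once into material buffers, then each
    light is applied to those buffers, so the work grows with
    len(objects) + len(lights) rather than their product."""
    gbuffer = []
    for obj in objects:
        gbuffer.append(f"write {obj} albedo/normal/depth to buffers")
    lighting = [f"apply {light} to buffers" for light in lights]
    return gbuffer + lighting + ["combine into final image"]

for step in deferred_render(["cube", "sphere"], ["sun", "lamp", "torch"]):
    print(step)  # 2 geometry passes + 3 light passes, not 6
```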
Whilst deferred rendering is the newer and more standard approach, and will more often than not give you better results, forward rendering can still be more efficient or effective in some cases, and the two are frequently used together in the same game, or even the same scene, as the concepts can be combined.
One way that deferred rendering can mitigate its translucency issues is with a concept called dithering (sometimes called 'screen door transparency' when used in this way). Dithering is actually an incredibly old approach to handling transparency which has seen a resurgence as a way to keep deferred rendering while still having seemingly translucent objects, and it can be seen in a lot of newer games. It also shows that understanding older approaches can lead to new workarounds and developments.
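A minimal sketch of the screen-door idea, using a standard 4x4 Bayer matrix as the repeating threshold pattern (pixel coordinates and an alpha value are the only inputs):

```python
# Standard 4x4 Bayer matrix, normalised to 0..1 thresholds below.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def screen_door_visible(x, y, alpha):
    """Decide per pixel whether an 'alpha' surface is drawn at all.

    Rather than blending, each pixel is either fully kept or fully
    discarded against a repeating threshold pattern; at a distance
    the pattern reads as partial transparency, and every surviving
    pixel is still fully opaque, which deferred rendering can handle.
    """
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return alpha > threshold

# Render a tiny 8x8 'quad' at 50% alpha: roughly half the pixels survive.
for y in range(8):
    print("".join("#" if screen_door_visible(x, y, 0.5) else "." for x in range(8)))
```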