
If you look at the two technologies Gaussian splatting and photogrammetry from the outside, a simple - or shall we say: slightly naive - question almost inevitably arises:
What is actually better now?
Of course, we all know that such questions are rarely easy to answer in reality. Nevertheless, it is a legitimate one - after all, why should we even bother with a new technology if it is not superior to the tried and tested in at least certain respects?
It was precisely with this question in mind that we embarked on our little practical experiment.
We had the opportunity to digitize a special architectural model, about three meters in size, which shows the never-realized design of a Munich opera house for Richard Wagner (currently on display at the Herrenchiemsee Museum).
A perfect opportunity to put both methods - photogrammetry, with which we already have a lot of experience, and Gaussian splatting with the Lixel K1 LiDAR scanner, which is still quite new to us - directly side by side and compare them honestly.
And yes, we also asked ourselves at the beginning:
Does it even make sense to test this new workflow, or is photogrammetry the safer option in the end anyway?
As is so often the case in such technological comparisons, the answer is:
It depends.
There are strengths and weaknesses on both sides - and it is precisely these that we would like to weigh up in detail and in a practical way in this blog post.
Our aim is not to provide a black-and-white answer, but rather a well-founded basis for deciding when which process makes sense for which application.
As usual, we opted for the tried-and-tested photogrammetry approach with over 1,000 images. The recording took about an hour. Thanks to our experience, we knew that we would achieve a solid result with this quantity of photos.
However, the effort involved is considerable - especially on site in a museum, where time is often limited and special consideration must be given to exhibits.
Despite all the care taken, a typical problem emerged during the subsequent evaluation:
Slight distortions and artifacts were particularly noticeable in shaded areas, such as the rear arches of our model. Lines that should actually be straight sometimes appear warped - almost like in a Dali painting. No wonder, because these areas lay in shadow, where only limited image material was available. This reveals a well-known weakness of photogrammetry without additional control-point data or point clouds.
The Lixel K1 LiDAR scanner brought an interesting alternative into play. Although the scanner is actually designed for larger rooms and objects, our test showed that it delivered surprisingly good results even with a three-meter tall model.
The capture took just 15 minutes, including several circuits around the model and different perspectives from above and below. A particularly positive aspect was that we were able to capture the model from a safe distance without endangering the exhibit.
We were aware that the scanner also captures the surroundings - but the resulting gaps in the room were not a problem for our focus on the model.
An important point:
Although we could theoretically have used a handheld scanner, we didn't have one ourselves.
What's more, even though the model was presented freely in the room on a table and we had physical access to it, working with a hand scanner would have been very time-consuming and potentially risky.
Because:
Hand-held scanners generally have to be used from a very close distance (less than 20 cm). The roof regions of the model in particular would have been difficult to reach. In order to capture these areas correctly, we would have had to bend far over the model or hold the scanner directly over the sensitive exhibit.
This would not only have been cumbersome, but also dangerous - both for the exhibit and for the stability of the capture.
In addition, with hand-held scanners there is always the question of how steadily the movements can be carried out in such positions without distorting the result.
It was therefore clear to us: LiDAR with Gaussian splatting appeared to be the most pragmatic, safest and at the same time quickest solution, as we were able to fully capture the model from a distance and without bending over.
One point that is often misunderstood should be made clear at this point:
The Lixel K1 LiDAR scanner does not necessarily generate proprietary Gaussian splatting data. Rather, the result depends on which processing workflow and which software are used.
The software provided by Lixel offers direct export as standard in formats that can be used very easily for Gaussian splatting - for example for applications in real-time engines or for web-based visualizations.
However, it would be just as possible to use the raw data from the Lixel K1 for classic photogrammetry workflows. To do this, the LiDAR data would have to be exported as point clouds and combined with photogrammetric methods or converted into polygon models.
This is perfectly feasible - but involves significantly higher effort and corresponding expertise. Specialized software solutions such as Agisoft Metashape or similar can support such workflows, but are not the standard application of the Lixel K1.
For us in this project, the use of Gaussian splatting was the most pragmatic way to achieve fast, high-performance results for a VR- and WebAR-compatible visualization.
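To give a feel for how lightweight such a web deployment can be, here is a minimal sketch of how a splat export could be loaded in the browser. We use the open-source viewer library @mkkellogg/gaussian-splats-3d as an example; the file name and camera values are illustrative assumptions, not Lixel defaults.

```typescript
import * as GaussianSplats3D from '@mkkellogg/gaussian-splats-3d';

// Minimal browser viewer for a splat export (.ply / .splat / .ksplat).
// 'opera-model.ksplat' is a placeholder file name for illustration.
const viewer = new GaussianSplats3D.Viewer({
  cameraUp: [0, 1, 0],               // world up axis
  initialCameraPosition: [0, 1, 5],  // start a few meters back
  initialCameraLookAt: [0, 0.5, 0],  // aim at the model's center
});

viewer
  .addSplatScene('opera-model.ksplat', { splatAlphaRemovalThreshold: 5 })
  .then(() => viewer.start()); // begin the render loop once loaded
```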
Both methods delivered valid results in comparison - each with their own strengths and weaknesses.
Photogrammetry produced a polygonal model with fine textures, but showed weaknesses in difficult areas (e.g. shadow zones).
The Lixel K1 with Gaussian splatting delivered a point-cloud-based model with a soft, slightly flat look that is particularly impressive in real time.
An interesting difference was revealed in the amount of data and performance on mobile devices.
The photogrammetry model was significantly larger overall: at a quality acceptable to us, the file size was around 20 MB.
We found that reducing the polygon count - for example to optimize for mobile applications - very quickly leads to a visible loss of quality, especially with curved shapes such as the archways, which then appear faceted and lose detail (a typical decimation step is sketched below).
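For context, a decimation pass for web delivery could look like the following sketch, built on the open-source gltf-transform toolchain with meshoptimizer. The file names and the ratio/error values are illustrative assumptions, not the exact settings we used.

```typescript
import { NodeIO } from '@gltf-transform/core';
import { weld, simplify } from '@gltf-transform/functions';
import { MeshoptSimplifier } from 'meshoptimizer';

// Reduce the triangle count of a photogrammetry export for mobile use.
// 'scan.glb' and the ratio/error values are placeholders.
const io = new NodeIO();
const document = await io.read('scan.glb');

await document.transform(
  weld(), // merge duplicate vertices so simplification works cleanly
  simplify({
    simplifier: MeshoptSimplifier,
    ratio: 0.25,  // keep roughly 25% of the triangles
    error: 0.001, // maximum allowed geometric deviation
  }),
);

await io.write('scan-mobile.glb', document);
```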
In contrast, the Gaussian splatting file was only about 4 MB with comparable visual quality.
However, a test on a Samsung S20 showed that, despite the smaller file size, Gaussian splatting lagged slightly behind photogrammetry in performance.
The frame rate on the device was noticeably lower with Gaussian splatting, although still perfectly acceptable and usable for the application.
This is mainly due to the rendering method of Gaussian splatting, which generates a different GPU load than classic polygonal models because of the splats.
This point should be taken into account on mobile devices in particular, even if the amount of data appears smaller on paper.
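The underlying reason is that splats are semi-transparent and have to be alpha-blended from back to front, so a typical splat renderer re-sorts the entire splat set relative to the camera whenever the view changes - work that opaque, z-buffered polygon meshes avoid entirely. A simplified sketch of that recurring step, with hypothetical types for illustration:

```typescript
// Simplified sketch of the per-view work a splat renderer performs.
// Splat and Camera are hypothetical types for illustration only.
interface Splat {
  position: [number, number, number]; // plus color, opacity, covariance...
}
interface Camera {
  position: [number, number, number];
}

function sortSplatsBackToFront(splats: Splat[], camera: Camera): Splat[] {
  const dist2 = (p: [number, number, number]) =>
    (p[0] - camera.position[0]) ** 2 +
    (p[1] - camera.position[1]) ** 2 +
    (p[2] - camera.position[2]) ** 2;

  // Alpha blending only composites correctly back-to-front, so the
  // farthest splats must be drawn first. For millions of splats this
  // sort (often offloaded to a web worker or the GPU) is a recurring
  // cost every time the camera moves.
  return [...splats].sort((a, b) => dist2(b.position) - dist2(a.position));
}
```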
An important aspect of our project was the intended use of the 3D scan:
The model was later to be used in a WebAR application, where users can place the object in the room directly via their smartphone or tablet - without an additional app, purely via the browser.
This means:
File size and performance are critical factors, as the model has to be loaded via the Internet. Here you are forced to find a balance between high quality and the smallest possible amount of data.
Our tests showed:
Both the optimized polygonal model from photogrammetry and the Gaussian splatting model could be successfully integrated into WebAR.
Both versions could be delivered via WebGL without any problems; in both cases we paid particular attention to compactness.
It was pleasing that the Gaussian splatting model offered a convincing display quality despite its smaller file size and could also be delivered smoothly via WebGL - which is not a matter of course with all modern 3D data formats.
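For the polygonal variant, the integration can be as compact as the following sketch, a minimal WebXR AR scene built with Three.js. The model path and placement values are illustrative assumptions, not our production setup.

```typescript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { ARButton } from 'three/examples/jsm/webxr/ARButton.js';

// Minimal WebXR AR scene that loads an optimized polygonal model.
// 'opera-model-mobile.glb' is a placeholder path for illustration.
const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // enable WebXR rendering
document.body.appendChild(renderer.domElement);
document.body.appendChild(ARButton.createButton(renderer)); // "Start AR" button

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera();
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

new GLTFLoader().load('opera-model-mobile.glb', (gltf) => {
  gltf.scene.position.set(0, 0, -1.5); // place ~1.5 m in front of the user
  scene.add(gltf.scene);
});

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```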
In practical use in WebAR you had to look closely to even notice the differences between the two models.
Both versions were well suited for the application and performed stably, even on normal smartphones like our Samsung S20.
For all those who would like to see for themselves, we have made both versions available as WebAR applications.
Here you can try out the differences directly in the browser:
➡️ Show photogrammetry model in WebAR
➡️ Show Gaussian splatting model in WebAR
Photogrammetry: tools for better alignment - and their downsides
To achieve the cleanest possible alignment of the photos in photogrammetry, we used a number of tried-and-tested aids in this project:
GPS module on the camera:
Our photographer Johannes Müller used a GPS module mounted on the camera. This meant that position and direction information was written directly into the metadata of the images, which made subsequent alignment in the photogrammetry software considerably easier (a quick way to verify such geotags is sketched after this list).
Marker images (similar to QR codes):
We also placed printed marker images on and around the exhibit. These markers help the software to correctly assign images, especially for objects with homogeneous surfaces or few distinctive details.
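As a side note, it is worth checking that the geotags actually ended up in the EXIF metadata before processing. Here is a small sketch using the open-source exifr library; the file name is a placeholder.

```typescript
import exifr from 'exifr';

// Sanity check: did the GPS module write coordinates into the EXIF
// metadata? 'IMG_0001.JPG' is a placeholder file name.
const gps = await exifr.gps('IMG_0001.JPG');

if (gps) {
  console.log(`lat ${gps.latitude}, lon ${gps.longitude}`);
} else {
  console.warn('No GPS data found - check the camera module.');
}
```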
These aids are common practice in photogrammetry - however, they have the disadvantage that the markers remain visible in the textures later and have to be removed manually.
Fortunately, this is relatively easy to solve with polygonal 3D models, as there are established tools such as Substance Painter that can be used to retouch the textures directly on the model.
However, it must be taken into account that the UV atlases generated by photogrammetry software are often very complex and confusing, which makes texturing somewhat more demanding than with manually created models with clean UV layouts.
The situation is quite different with Gaussian splatting:
As there are no classic UV textures here, but the color information is stored directly in the individual splats, there are currently no established workflows for simply "painting out" markers or faults.
While it is possible to delete or move splats, there is a lack of tools that enable targeted color correction or retouching of individual splats.
Common editing tools such as PostShot or SuperSplat do not currently offer this functionality.
It was only after lengthy research that we came across a Blender addon called "3DGS Render Addon", which makes it possible to edit and paint over Gaussian splats directly in Blender.
This plugin ultimately enabled us to remove the marker positions from the splat model - even if the workflow was much more experimental and error-prone than in classic photogrammetry.
This experience clearly shows:
While workflow and tool support in photogrammetry are mature and professionally standardized, Gaussian splatting is still at a very early stage, especially when it comes to editing and retouching.
What is actually better now? Photogrammetry or Gaussian splatting?
Our honest answer after this project is:
For us, Gaussian splatting was the better solution.
Why?
Because in our application scenario - the quick capture of an exhibit in the museum for later use in a WebAR application - two points were particularly important:
Especially when - as in a museum - there is only a narrow time slot available for recording, the speed advantage of Gaussian splatting is enormous.
The lower post-processing requirements also make Gaussian splatting a much more economical solution when you consider working time and production costs.
However, Gaussian splatting is not yet a fully-fledged replacement for classic polygonal workflows.
The tool landscape is still young, many applications are still in the beta stage, and editing is currently less flexible than with polygon models.
So if you need data for classic CAD, offline renderings or complex material editing, you will currently still run up against real limits with Gaussian splatting.
Even when integrating into well-known engines such as Unreal or Unity, we found that large Gaussian splatting files are often problematic - crashes or performance problems are unfortunately still commonplace here.
Curiously, it was the tests with WebGL and browser-based players that surprised us most positively in practice.
We see Gaussian splatting as the more future-proof technology, but believe that you still need to be patient and willing to experiment in order to work productively with it.
Frequently asked questions
What is the technical difference between the two methods?
Photogrammetry uses classic image data for reconstruction, while Gaussian splatting combines point clouds with a volumetric representation, which offers advantages in difficult lighting conditions.
Which method is faster?
In our test, the photogrammetry took about an hour, while the LiDAR scan with Gaussian splatting took only 15 minutes.
Which method delivers the better quality?
Depending on the situation: photogrammetry is more precise with optimum lighting, while Gaussian splatting is convincing in shaded areas and areas that are difficult to access.
Can the two methods be combined?
Yes, hybrid workflows are possible and can make optimum use of the respective strengths of both processes.
Can Gaussian splats be edited afterwards?
Yes, but the tool landscape is still limited. Plugins such as the "3DGS Render Addon" for Blender enable initial editing.
Why is the frame rate lower with Gaussian splatting?
Especially on mobile devices, the GPU load caused by splat rendering can lead to a lower frame rate, even though the files are small.
Are you interested in developing a virtual reality or 360° application? You may still have questions about budget and implementation. Feel free to contact me.
I look forward to hearing from you,
Clarence Dadson CEO Design4real