Thanks, everybody. All very relevant and helpful comments.
To understand the true limitations of the current approach, I'm running a brand-new capture set of 500K samples with a slightly enhanced morph library.
The process is semi-automated and will take about 4 days (and nights) to finish.
Also, the model has been updated to a dual-KNN architecture.
The morph mapper has been split into two separate models:
- Front model — trained on front-facing captures, handles overall face shape, eye spacing, nose width, jaw, brows, etc.
- Side model — trained on side-view captures, handles depth-related features, nose projection, lip depth, chin projection, forehead slope, etc.
A single combined model trained on both front and side features was dominated by the 21 front measurements, effectively drowning out the 7 side measurements.
Separating them gives the side view its own dedicated model with full weight on depth features, rather than competing against the front measurements in a shared feature space.
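For concreteness, here's a minimal sketch of what the split could look like, assuming scikit-learn's KNeighborsRegressor; the file names, neighbor count, and array shapes are placeholders for illustration, not the actual pipeline:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical arrays: 21 front measurements, 7 side measurements,
# and one morph-parameter vector per capture sample.
X_front = np.load("front_features.npy")   # shape (N, 21)
X_side = np.load("side_features.npy")     # shape (N, 7)
y_morphs = np.load("morph_targets.npy")   # shape (N, num_morphs)

# Each view gets its own regressor, so the 7 depth features define
# their own distance metric instead of being swamped by the 21
# front features in a shared space.
front_model = KNeighborsRegressor(n_neighbors=5, weights="distance")
front_model.fit(X_front, y_morphs)

side_model = KNeighborsRegressor(n_neighbors=5, weights="distance")
side_model.fit(X_side, y_morphs)
```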
At inference time, both models run independently, and their predictions are blended, with the front model weighted slightly higher.
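The blend itself could be as simple as a convex combination of the two predictions; a sketch, with 0.6 as a purely illustrative front weight:

```python
import numpy as np

def blend_predictions(front_model, side_model, x_front, x_side, front_weight=0.6):
    """Run both KNN models independently and blend their morph predictions,
    weighting the front model slightly higher. front_weight is a tunable
    placeholder, not the value actually used."""
    p_front = front_model.predict(np.atleast_2d(x_front))
    p_side = side_model.predict(np.atleast_2d(x_side))
    return front_weight * p_front + (1.0 - front_weight) * p_side
```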
This gives the generator direct information about facial depth rather than an approximation inferred from the front view alone.
The result is noticeably more accurate profile silhouettes.
I should be able to present some examples to you within a week.