Getting to the way it's supposed to be!

2024-10-12 00:43:51 +02:00
parent 84729f9d27
commit 8f2dad9cec
2663 changed files with 540071 additions and 14 deletions

File diff suppressed because it is too large


@@ -0,0 +1,38 @@
# Viewer example
This example contains a small model viewer using [Sokol](https://github.com/floooh/sokol).
## Building
To build the example you need to compile `../../ufbx.c`, `external.c`, and `viewer.c` and link
with the necessary platform libraries.
### Linux
```bash
# Install dependencies if missing (Debian/Ubuntu specific here)
sudo apt install -y libgl1-mesa-dev libx11-dev libxi-dev libxcursor-dev
# Compile and link system libraries
clang ../../ufbx.c external.c viewer.c -lm -ldl -lGL -lX11 -lXi -lXcursor -o viewer
# Run the executable
./viewer /path/to/my/model.fbx
```
### Windows
Create a new Visual Studio solution and add `../../ufbx.c`, `external.c`, and `viewer.c` as source files.
Either build and run from the command line, passing the desired model as an argument, or
set the command-line arguments in the project's "Debugging" settings.
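If you prefer the command line, something along these lines should also work from a Visual Studio "x64 Native Tools" developer prompt (a sketch only; the sokol headers pull in the required system libraries via `#pragma comment(lib, ...)` on MSVC, but paths and flags may need adjusting for your setup):
```
:: Hypothetical MSVC build; adjust paths and flags as needed
cl /O2 ..\..\ufbx.c external.c viewer.c /Fe:viewer.exe
viewer.exe C:\path\to\my\model.fbx
```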
## Shaders
The compiled shaders are committed to the repository, so unless you modify the `.glsl` files you don't need to do anything.
The shaders are compiled using [sokol-shdc](https://github.com/floooh/sokol-tools/blob/master/docs/sokol-shdc.md);
you can download prebuilt binaries from [sokol-tools-bin](https://github.com/floooh/sokol-tools-bin).
```bash
# Compile the mesh shader to a header
sokol-shdc --input shaders/mesh.glsl --output shaders/mesh.h --slang glsl330:hlsl5:metal_macos -b
```
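The generated `shaders/mesh.h` is what `viewer.c` includes. sokol-shdc emits one `*_shader_desc()` function per `@program`, so (assuming the default naming convention) the shaders defined in `mesh.glsl` can be instantiated roughly like this:
```c
// Sketch only: names follow sokol-shdc's "<program>_shader_desc" convention
// for the "static_lit" and "skinned_lit" programs in mesh.glsl.
sg_shader static_shader  = sg_make_shader(static_lit_shader_desc(sg_query_backend()));
sg_shader skinned_shader = sg_make_shader(skinned_lit_shader_desc(sg_query_backend()));
```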


@@ -0,0 +1,27 @@
#define SOKOL_IMPL
#if defined(__APPLE__)
#define SOKOL_METAL
#elif defined(_WIN32)
#define SOKOL_D3D11
#elif defined(__EMSCRIPTEN__)
#define SOKOL_GLES2
#else
#define SOKOL_GLCORE33
#endif
#define UMATH_IMPLEMENTATION
#if defined(TEST_VIEWER)
#define DUMMY_SAPP_MAX_FRAMES 64
#include "external/dummy_sokol_app.h"
#include "external/dummy_sokol_time.h"
#include "external/dummy_sokol_gfx.h"
#else
#include "external/sokol_app.h"
#include "external/sokol_time.h"
#include "external/sokol_gfx.h"
#endif
#include "external/sokol_glue.h"
#include "external/umath.h"

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,211 @@
#if defined(SOKOL_IMPL) && !defined(SOKOL_TIME_IMPL)
#define SOKOL_TIME_IMPL
#endif
#ifndef SOKOL_TIME_INCLUDED
/*
sokol_time.h -- simple cross-platform time measurement
Project URL: https://github.com/floooh/sokol
Do this:
#define SOKOL_IMPL or
#define SOKOL_TIME_IMPL
before you include this file in *one* C or C++ file to create the
implementation.
Optionally provide the following defines with your own implementations:
SOKOL_ASSERT(c) - your own assert macro (default: assert(c))
SOKOL_TIME_API_DECL - public function declaration prefix (default: extern)
SOKOL_API_DECL - same as SOKOL_TIME_API_DECL
SOKOL_API_IMPL - public function implementation prefix (default: -)
If sokol_time.h is compiled as a DLL, define the following before
including the declaration or implementation:
SOKOL_DLL
On Windows, SOKOL_DLL will define SOKOL_TIME_API_DECL as __declspec(dllexport)
or __declspec(dllimport) as needed.
void stm_setup();
Call once before any other functions to initialize sokol_time
(this calls for instance QueryPerformanceFrequency on Windows)
uint64_t stm_now();
Get current point in time in unspecified 'ticks'. The value that
is returned has no relation to the 'wall-clock' time and is
not in a specific time unit, it is only useful to compute
time differences.
uint64_t stm_diff(uint64_t new, uint64_t old);
Computes the time difference between new and old. This will always
return a positive, non-zero value.
uint64_t stm_since(uint64_t start);
Takes the current time, and returns the elapsed time since start
(this is a shortcut for "stm_diff(stm_now(), start)")
uint64_t stm_laptime(uint64_t* last_time);
This is useful for measuring frame time and other recurring
events. It takes the current time, returns the time difference
to the value in last_time, and stores the current time in
last_time for the next call. If the value in last_time is 0,
the return value will be zero (this usually happens on the
very first call).
uint64_t stm_round_to_common_refresh_rate(uint64_t duration)
This oddly named function takes a measured frame time and
returns the closest "nearby" common display refresh rate frame duration
in ticks. If the input duration isn't close to any common display
refresh rate, the input duration will be returned unchanged as a fallback.
The main purpose of this function is to remove jitter/inaccuracies from
measured frame times, and instead use the display refresh rate as
frame duration.
Use the following functions to convert a duration in ticks into
useful time units:
double stm_sec(uint64_t ticks);
double stm_ms(uint64_t ticks);
double stm_us(uint64_t ticks);
double stm_ns(uint64_t ticks);
Converts a tick value into seconds, milliseconds, microseconds
or nanoseconds. Note that not all platforms will have nanosecond
or even microsecond precision.
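A minimal usage sketch (not part of the original header text; do_some_work()
is a placeholder for whatever is being measured):

    stm_setup();
    uint64_t start = stm_now();
    do_some_work();
    double elapsed_ms = stm_ms(stm_since(start));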
Uses the following time measurement functions under the hood:
Windows: QueryPerformanceFrequency() / QueryPerformanceCounter()
MacOS/iOS: mach_absolute_time()
emscripten: performance.now()
Linux+others: clock_gettime(CLOCK_MONOTONIC)
zlib/libpng license
Copyright (c) 2018 Andre Weissflog
This software is provided 'as-is', without any express or implied warranty.
In no event will the authors be held liable for any damages arising from the
use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software in a
product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not
be misrepresented as being the original software.
3. This notice may not be removed or altered from any source
distribution.
*/
#define SOKOL_TIME_INCLUDED (1)
#include <stdint.h>
#if defined(SOKOL_API_DECL) && !defined(SOKOL_TIME_API_DECL)
#define SOKOL_TIME_API_DECL SOKOL_API_DECL
#endif
#ifndef SOKOL_TIME_API_DECL
#if defined(_WIN32) && defined(SOKOL_DLL) && defined(SOKOL_TIME_IMPL)
#define SOKOL_TIME_API_DECL __declspec(dllexport)
#elif defined(_WIN32) && defined(SOKOL_DLL)
#define SOKOL_TIME_API_DECL __declspec(dllimport)
#else
#define SOKOL_TIME_API_DECL extern
#endif
#endif
#ifdef __cplusplus
extern "C" {
#endif
SOKOL_TIME_API_DECL void stm_setup(void);
SOKOL_TIME_API_DECL uint64_t stm_now(void);
SOKOL_TIME_API_DECL uint64_t stm_diff(uint64_t new_ticks, uint64_t old_ticks);
SOKOL_TIME_API_DECL uint64_t stm_since(uint64_t start_ticks);
SOKOL_TIME_API_DECL uint64_t stm_laptime(uint64_t* last_time);
SOKOL_TIME_API_DECL uint64_t stm_round_to_common_refresh_rate(uint64_t frame_ticks);
SOKOL_TIME_API_DECL double stm_sec(uint64_t ticks);
SOKOL_TIME_API_DECL double stm_ms(uint64_t ticks);
SOKOL_TIME_API_DECL double stm_us(uint64_t ticks);
SOKOL_TIME_API_DECL double stm_ns(uint64_t ticks);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif // SOKOL_TIME_INCLUDED
/*-- IMPLEMENTATION ----------------------------------------------------------*/
#ifdef SOKOL_TIME_IMPL
#define SOKOL_TIME_IMPL_INCLUDED (1)
#include <string.h> /* memset */
#ifndef SOKOL_API_IMPL
#define SOKOL_API_IMPL
#endif
extern uint64_t dummy_stm_time_ns;
SOKOL_API_IMPL void stm_setup(void)
{
if (dummy_stm_time_ns == 0) {
dummy_stm_time_ns = 1;
}
}
SOKOL_API_IMPL uint64_t stm_now(void)
{
return dummy_stm_time_ns;
}
SOKOL_API_IMPL uint64_t stm_diff(uint64_t new_ticks, uint64_t old_ticks)
{
return new_ticks - old_ticks;
}
SOKOL_API_IMPL uint64_t stm_since(uint64_t start_ticks)
{
return stm_now() - start_ticks;
}
SOKOL_API_IMPL uint64_t stm_laptime(uint64_t* last_time)
{
uint64_t dt = 0;
uint64_t now = stm_now();
if (0 != *last_time) {
dt = stm_diff(now, *last_time);
}
*last_time = now;
return dt;
}
SOKOL_API_IMPL uint64_t stm_round_to_common_refresh_rate(uint64_t frame_ticks)
{
return frame_ticks;
}
SOKOL_API_IMPL double stm_sec(uint64_t ticks)
{
return (double)ticks * 1e-9;
}
SOKOL_API_IMPL double stm_ms(uint64_t ticks)
{
return (double)ticks * 1e-6;
}
SOKOL_API_IMPL double stm_us(uint64_t ticks)
{
return (double)ticks * 1e-3;
}
SOKOL_API_IMPL double stm_ns(uint64_t ticks)
{
return (double)ticks;
}
#endif /* SOKOL_TIME_IMPL */

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,137 @@
#if defined(SOKOL_IMPL) && !defined(SOKOL_GLUE_IMPL)
#define SOKOL_GLUE_IMPL
#endif
#ifndef SOKOL_GLUE_INCLUDED
/*
sokol_glue.h -- glue helper functions for sokol headers
Project URL: https://github.com/floooh/sokol
Do this:
#define SOKOL_IMPL or
#define SOKOL_GLUE_IMPL
before you include this file in *one* C or C++ file to create the
implementation.
...optionally provide the following macros to override defaults:
SOKOL_ASSERT(c) - your own assert macro (default: assert(c))
SOKOL_GLUE_API_DECL - public function declaration prefix (default: extern)
SOKOL_API_DECL - same as SOKOL_GLUE_API_DECL
SOKOL_API_IMPL - public function implementation prefix (default: -)
If sokol_glue.h is compiled as a DLL, define the following before
including the declaration or implementation:
SOKOL_DLL
On Windows, SOKOL_DLL will define SOKOL_GLUE_API_DECL as __declspec(dllexport)
or __declspec(dllimport) as needed.
OVERVIEW
========
The sokol core headers should not depend on each other, but sometimes
it's useful to have a set of helper functions as "glue" between
two or more sokol headers.
This is what sokol_glue.h is for. Simply include the header after other
sokol headers (both for the implementation and declaration), and
depending on what headers have been included before, sokol_glue.h
will make available "glue functions".
PROVIDED FUNCTIONS
==================
- if sokol_app.h and sokol_gfx.h are included:
sg_context_desc sapp_sgcontext(void):
Returns an sg_context_desc struct initialized by calling
sokol_app.h functions.
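For example (a sketch, assuming a sokol_gfx.h version whose sg_desc has
a matching .context field):

    sg_setup(&(sg_desc){
        .context = sapp_sgcontext()
    });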
LICENSE
=======
zlib/libpng license
Copyright (c) 2018 Andre Weissflog
This software is provided 'as-is', without any express or implied warranty.
In no event will the authors be held liable for any damages arising from the
use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software in a
product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not
be misrepresented as being the original software.
3. This notice may not be removed or altered from any source
distribution.
*/
#define SOKOL_GLUE_INCLUDED
#if defined(SOKOL_API_DECL) && !defined(SOKOL_GLUE_API_DECL)
#define SOKOL_GLUE_API_DECL SOKOL_API_DECL
#endif
#ifndef SOKOL_GLUE_API_DECL
#if defined(_WIN32) && defined(SOKOL_DLL) && defined(SOKOL_GLUE_IMPL)
#define SOKOL_GLUE_API_DECL __declspec(dllexport)
#elif defined(_WIN32) && defined(SOKOL_DLL)
#define SOKOL_GLUE_API_DECL __declspec(dllimport)
#else
#define SOKOL_GLUE_API_DECL extern
#endif
#endif
#ifdef __cplusplus
extern "C" {
#endif
#if defined(SOKOL_GFX_INCLUDED) && defined(SOKOL_APP_INCLUDED)
SOKOL_GLUE_API_DECL sg_context_desc sapp_sgcontext(void);
#endif
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif /* SOKOL_GLUE_INCLUDED */
/*-- IMPLEMENTATION ----------------------------------------------------------*/
#ifdef SOKOL_GLUE_IMPL
#define SOKOL_GLUE_IMPL_INCLUDED (1)
#include <string.h> /* memset */
#ifndef SOKOL_API_IMPL
#define SOKOL_API_IMPL
#endif
#if defined(SOKOL_GFX_INCLUDED) && defined(SOKOL_APP_INCLUDED)
SOKOL_API_IMPL sg_context_desc sapp_sgcontext(void) {
sg_context_desc desc;
memset(&desc, 0, sizeof(desc));
desc.color_format = (sg_pixel_format) sapp_color_format();
desc.depth_format = (sg_pixel_format) sapp_depth_format();
desc.sample_count = sapp_sample_count();
desc.gl.force_gles2 = sapp_gles2();
desc.metal.device = sapp_metal_get_device();
desc.metal.renderpass_descriptor_cb = sapp_metal_get_renderpass_descriptor;
desc.metal.drawable_cb = sapp_metal_get_drawable;
desc.d3d11.device = sapp_d3d11_get_device();
desc.d3d11.device_context = sapp_d3d11_get_device_context();
desc.d3d11.render_target_view_cb = sapp_d3d11_get_render_target_view;
desc.d3d11.depth_stencil_view_cb = sapp_d3d11_get_depth_stencil_view;
desc.wgpu.device = sapp_wgpu_get_device();
desc.wgpu.render_view_cb = sapp_wgpu_get_render_view;
desc.wgpu.resolve_view_cb = sapp_wgpu_get_resolve_view;
desc.wgpu.depth_stencil_view_cb = sapp_wgpu_get_depth_stencil_view;
return desc;
}
#endif
#endif /* SOKOL_GLUE_IMPL */


@@ -0,0 +1,323 @@
#if defined(SOKOL_IMPL) && !defined(SOKOL_TIME_IMPL)
#define SOKOL_TIME_IMPL
#endif
#ifndef SOKOL_TIME_INCLUDED
/*
sokol_time.h -- simple cross-platform time measurement
Project URL: https://github.com/floooh/sokol
Do this:
#define SOKOL_IMPL or
#define SOKOL_TIME_IMPL
before you include this file in *one* C or C++ file to create the
implementation.
Optionally provide the following defines with your own implementations:
SOKOL_ASSERT(c) - your own assert macro (default: assert(c))
SOKOL_TIME_API_DECL - public function declaration prefix (default: extern)
SOKOL_API_DECL - same as SOKOL_TIME_API_DECL
SOKOL_API_IMPL - public function implementation prefix (default: -)
If sokol_time.h is compiled as a DLL, define the following before
including the declaration or implementation:
SOKOL_DLL
On Windows, SOKOL_DLL will define SOKOL_TIME_API_DECL as __declspec(dllexport)
or __declspec(dllimport) as needed.
void stm_setup();
Call once before any other functions to initialize sokol_time
(this calls for instance QueryPerformanceFrequency on Windows)
uint64_t stm_now();
Get current point in time in unspecified 'ticks'. The value that
is returned has no relation to the 'wall-clock' time and is
not in a specific time unit, it is only useful to compute
time differences.
uint64_t stm_diff(uint64_t new, uint64_t old);
Computes the time difference between new and old. This will always
return a positive, non-zero value.
uint64_t stm_since(uint64_t start);
Takes the current time, and returns the elapsed time since start
(this is a shortcut for "stm_diff(stm_now(), start)")
uint64_t stm_laptime(uint64_t* last_time);
This is useful for measuring frame time and other recurring
events. It takes the current time, returns the time difference
to the value in last_time, and stores the current time in
last_time for the next call. If the value in last_time is 0,
the return value will be zero (this usually happens on the
very first call).
uint64_t stm_round_to_common_refresh_rate(uint64_t duration)
This oddly named function takes a measured frame time and
returns the closest "nearby" common display refresh rate frame duration
in ticks. If the input duration isn't close to any common display
refresh rate, the input duration will be returned unchanged as a fallback.
The main purpose of this function is to remove jitter/inaccuracies from
measured frame times, and instead use the display refresh rate as
frame duration.
Use the following functions to convert a duration in ticks into
useful time units:
double stm_sec(uint64_t ticks);
double stm_ms(uint64_t ticks);
double stm_us(uint64_t ticks);
double stm_ns(uint64_t ticks);
Converts a tick value into seconds, milliseconds, microseconds
or nanoseconds. Note that not all platforms will have nanosecond
or even microsecond precision.
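A typical frame-timing sketch (not part of the original header text):

    // once at startup
    stm_setup();
    // once per frame
    static uint64_t last_time;
    double frame_ms = stm_ms(stm_laptime(&last_time));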
Uses the following time measurement functions under the hood:
Windows: QueryPerformanceFrequency() / QueryPerformanceCounter()
MacOS/iOS: mach_absolute_time()
emscripten: performance.now()
Linux+others: clock_gettime(CLOCK_MONOTONIC)
zlib/libpng license
Copyright (c) 2018 Andre Weissflog
This software is provided 'as-is', without any express or implied warranty.
In no event will the authors be held liable for any damages arising from the
use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software in a
product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not
be misrepresented as being the original software.
3. This notice may not be removed or altered from any source
distribution.
*/
#define SOKOL_TIME_INCLUDED (1)
#include <stdint.h>
#if defined(SOKOL_API_DECL) && !defined(SOKOL_TIME_API_DECL)
#define SOKOL_TIME_API_DECL SOKOL_API_DECL
#endif
#ifndef SOKOL_TIME_API_DECL
#if defined(_WIN32) && defined(SOKOL_DLL) && defined(SOKOL_TIME_IMPL)
#define SOKOL_TIME_API_DECL __declspec(dllexport)
#elif defined(_WIN32) && defined(SOKOL_DLL)
#define SOKOL_TIME_API_DECL __declspec(dllimport)
#else
#define SOKOL_TIME_API_DECL extern
#endif
#endif
#ifdef __cplusplus
extern "C" {
#endif
SOKOL_TIME_API_DECL void stm_setup(void);
SOKOL_TIME_API_DECL uint64_t stm_now(void);
SOKOL_TIME_API_DECL uint64_t stm_diff(uint64_t new_ticks, uint64_t old_ticks);
SOKOL_TIME_API_DECL uint64_t stm_since(uint64_t start_ticks);
SOKOL_TIME_API_DECL uint64_t stm_laptime(uint64_t* last_time);
SOKOL_TIME_API_DECL uint64_t stm_round_to_common_refresh_rate(uint64_t frame_ticks);
SOKOL_TIME_API_DECL double stm_sec(uint64_t ticks);
SOKOL_TIME_API_DECL double stm_ms(uint64_t ticks);
SOKOL_TIME_API_DECL double stm_us(uint64_t ticks);
SOKOL_TIME_API_DECL double stm_ns(uint64_t ticks);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif // SOKOL_TIME_INCLUDED
/*-- IMPLEMENTATION ----------------------------------------------------------*/
#ifdef SOKOL_TIME_IMPL
#define SOKOL_TIME_IMPL_INCLUDED (1)
#include <string.h> /* memset */
#ifndef SOKOL_API_IMPL
#define SOKOL_API_IMPL
#endif
#ifndef SOKOL_ASSERT
#include <assert.h>
#define SOKOL_ASSERT(c) assert(c)
#endif
#ifndef _SOKOL_PRIVATE
#if defined(__GNUC__) || defined(__clang__)
#define _SOKOL_PRIVATE __attribute__((unused)) static
#else
#define _SOKOL_PRIVATE static
#endif
#endif
#if defined(_WIN32)
#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN
#endif
#include <windows.h>
typedef struct {
uint32_t initialized;
LARGE_INTEGER freq;
LARGE_INTEGER start;
} _stm_state_t;
#elif defined(__APPLE__) && defined(__MACH__)
#include <mach/mach_time.h>
typedef struct {
uint32_t initialized;
mach_timebase_info_data_t timebase;
uint64_t start;
} _stm_state_t;
#elif defined(__EMSCRIPTEN__)
#include <emscripten/emscripten.h>
typedef struct {
uint32_t initialized;
double start;
} _stm_state_t;
#else /* anything else, this will need more care for non-Linux platforms */
#ifdef ESP8266
// On the ESP8266, clock_gettime ignores the first argument and CLOCK_MONOTONIC isn't defined
#define CLOCK_MONOTONIC 0
#endif
#include <time.h>
typedef struct {
uint32_t initialized;
uint64_t start;
} _stm_state_t;
#endif
static _stm_state_t _stm;
/* prevent 64-bit overflow when computing relative timestamp
see https://gist.github.com/jspohr/3dc4f00033d79ec5bdaf67bc46c813e3
*/
#if defined(_WIN32) || (defined(__APPLE__) && defined(__MACH__))
_SOKOL_PRIVATE int64_t int64_muldiv(int64_t value, int64_t numer, int64_t denom) {
int64_t q = value / denom;
int64_t r = value % denom;
return q * numer + r * numer / denom;
}
#endif
#if defined(__EMSCRIPTEN__)
EM_JS(double, stm_js_perfnow, (void), {
return performance.now();
});
#endif
SOKOL_API_IMPL void stm_setup(void) {
memset(&_stm, 0, sizeof(_stm));
_stm.initialized = 0xABCDABCD;
#if defined(_WIN32)
QueryPerformanceFrequency(&_stm.freq);
QueryPerformanceCounter(&_stm.start);
#elif defined(__APPLE__) && defined(__MACH__)
mach_timebase_info(&_stm.timebase);
_stm.start = mach_absolute_time();
#elif defined(__EMSCRIPTEN__)
_stm.start = stm_js_perfnow();
#else
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
_stm.start = (uint64_t)ts.tv_sec*1000000000 + (uint64_t)ts.tv_nsec;
#endif
}
SOKOL_API_IMPL uint64_t stm_now(void) {
SOKOL_ASSERT(_stm.initialized == 0xABCDABCD);
uint64_t now;
#if defined(_WIN32)
LARGE_INTEGER qpc_t;
QueryPerformanceCounter(&qpc_t);
now = (uint64_t) int64_muldiv(qpc_t.QuadPart - _stm.start.QuadPart, 1000000000, _stm.freq.QuadPart);
#elif defined(__APPLE__) && defined(__MACH__)
const uint64_t mach_now = mach_absolute_time() - _stm.start;
now = (uint64_t) int64_muldiv((int64_t)mach_now, (int64_t)_stm.timebase.numer, (int64_t)_stm.timebase.denom);
#elif defined(__EMSCRIPTEN__)
double js_now = stm_js_perfnow() - _stm.start;
SOKOL_ASSERT(js_now >= 0.0);
now = (uint64_t) (js_now * 1000000.0);
#else
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
now = ((uint64_t)ts.tv_sec*1000000000 + (uint64_t)ts.tv_nsec) - _stm.start;
#endif
return now;
}
SOKOL_API_IMPL uint64_t stm_diff(uint64_t new_ticks, uint64_t old_ticks) {
if (new_ticks > old_ticks) {
return new_ticks - old_ticks;
}
else {
return 1;
}
}
SOKOL_API_IMPL uint64_t stm_since(uint64_t start_ticks) {
return stm_diff(stm_now(), start_ticks);
}
SOKOL_API_IMPL uint64_t stm_laptime(uint64_t* last_time) {
SOKOL_ASSERT(last_time);
uint64_t dt = 0;
uint64_t now = stm_now();
if (0 != *last_time) {
dt = stm_diff(now, *last_time);
}
*last_time = now;
return dt;
}
// first number is frame duration in ns, second number is tolerance in ns,
// the resulting min/max values must not overlap!
static const uint64_t _stm_refresh_rates[][2] = {
{ 16666667, 1000000 }, // 60 Hz: 16.6667 +- 1ms
{ 13888889, 250000 }, // 72 Hz: 13.8889 +- 0.25ms
{ 13333333, 250000 }, // 75 Hz: 13.3333 +- 0.25ms
{ 11764706, 250000 }, // 85 Hz: 11.7647 +- 0.25ms
{ 11111111, 250000 }, // 90 Hz: 11.1111 +- 0.25ms
{ 10000000, 500000 }, // 100 Hz: 10.0000 +- 0.5ms
{ 8333333, 500000 }, // 120 Hz: 8.3333 +- 0.5ms
{ 6944445, 500000 }, // 144 Hz: 6.9445 +- 0.5ms
{ 4166667, 1000000 }, // 240 Hz: 4.1666 +- 1ms
{ 0, 0 }, // keep the last element always at zero
};
SOKOL_API_IMPL uint64_t stm_round_to_common_refresh_rate(uint64_t ticks) {
uint64_t ns;
int i = 0;
while (0 != (ns = _stm_refresh_rates[i][0])) {
uint64_t tol = _stm_refresh_rates[i][1];
if ((ticks > (ns - tol)) && (ticks < (ns + tol))) {
return ns;
}
i++;
}
// fallthrough: didn't fit into any buckets
return ticks;
}
SOKOL_API_IMPL double stm_sec(uint64_t ticks) {
return (double)ticks / 1000000000.0;
}
SOKOL_API_IMPL double stm_ms(uint64_t ticks) {
return (double)ticks / 1000000.0;
}
SOKOL_API_IMPL double stm_us(uint64_t ticks) {
return (double)ticks / 1000.0;
}
SOKOL_API_IMPL double stm_ns(uint64_t ticks) {
return (double)ticks;
}
#endif /* SOKOL_TIME_IMPL */


@@ -0,0 +1,658 @@
#ifndef UMATH_H_INCLUDED
#define UMATH_H_INCLUDED
#include <math.h>
#include <float.h>
#include <stdbool.h>
#if defined(_MSC_VER)
#pragma warning(push)
#pragma warning(disable: 4201)
#endif
#define um_inline static inline
#if defined(__cplusplus)
#define um_abi extern "C"
#else
#define um_abi
#endif
typedef struct um_vec2 {
union {
struct { float x, y; };
struct { float v[2]; };
};
} um_vec2;
typedef struct um_vec3 {
union {
struct { float x, y, z; };
struct { um_vec2 xy; };
struct { float v[3]; };
};
} um_vec3;
typedef struct um_vec4 {
union {
struct { float x, y, z, w; };
struct { um_vec3 xyz; };
struct { um_vec2 xy; };
struct { float v[4]; };
};
} um_vec4;
typedef struct um_quat {
union {
struct { float x, y, z, w; };
struct { um_vec4 xyzw; };
struct { um_vec3 xyz; };
struct { float v[4]; };
};
} um_quat;
typedef struct um_mat {
union {
struct { float m[16]; };
struct { um_vec4 cols[4]; };
struct { float m11, m21, m31, m41, m12, m22, m32, m42, m13, m23, m33, m43, m14, m24, m34, m44; };
};
} um_mat;
#define UM_PI (3.14159265358979323846f)
#define UM_2PI (6.28318530717958647692f)
#define UM_RCP_PI (1.0f / 3.14159265358979323846f)
#define UM_RCP_2PI (1.0f / 6.28318530717958647692f)
#define UM_RAD_TO_DEG (180.0f / UM_PI)
#define UM_DEG_TO_RAD (UM_PI / 180.0f)
#if defined(__cplusplus)
#define um_new(type) type
#else
#define um_new(type) (type)
#endif
#define um_v2(x, y) (um_new(um_vec2){{{ (x), (y) }}})
#define um_v3(x, y, z) (um_new(um_vec3){{{ (x), (y), (z) }}})
#define um_v4(x, y, z, w) (um_new(um_vec4){{{ (x), (y), (z), (w) }}})
#define um_quat_xyzw(x, y, z, w) (um_new(um_quat){{{ (x), (y), (z), (w) }}})
#define um_mat_rows(m11, m12, m13, m14, m21, m22, m23, m24, m31, m32, m33, m34, m41, m42, m43, m44, ...) \
(um_new(um_mat){{{ \
(m11), (m21), (m31), (m41), \
(m12), (m22), (m32), (m42), \
(m13), (m23), (m33), (m43), \
(m14), (m24), (m34), (m44), }} __VA_ARGS__ })
#define um_mat_cols(m11, m21, m31, m41, m12, m22, m32, m42, m13, m23, m33, m43, m14, m24, m34, m44, ...) \
(um_new(um_mat){{{ \
(m11), (m21), (m31), (m41), \
(m12), (m22), (m32), (m42), \
(m13), (m23), (m33), (m43), \
(m14), (m24), (m34), (m44), }} __VA_ARGS__ })
#define um_zero2 (um_v2(0, 0))
#define um_zero3 (um_v3(0, 0, 0))
#define um_zero4 (um_v4(0, 0, 0, 0))
#define um_one2 (um_v2(1, 1))
#define um_one3 (um_v3(1, 1, 1))
#define um_one4 (um_v4(1, 1, 1, 1))
#define um_quat_identity um_quat_xyzw(0, 0, 0, 1)
extern const um_mat um_mat_identity;
um_inline float um_sqrt(float a) { return sqrtf(a); }
um_inline float um_abs(float a) { return fabsf(a); }
um_inline float um_min(float a, float b) { return a < b ? a : b; }
um_inline float um_max(float a, float b) { return b < a ? a : b; }
um_inline float um_clamp(float a, float minv, float maxv) { return um_min(um_max(a, minv), maxv); }
um_inline float um_lerp(float a, float b, float t) { return a*(1.0f-t) + b*t; }
um_inline float um_smoothstep(float a) { return a * a * (3.0f - 2.0f * a); }
um_inline um_vec2 um_dup2(float a) { return um_v2(a, a); }
um_inline um_vec2 um_add2(um_vec2 a, um_vec2 b) { return um_v2(a.x + b.x, a.y + b.y); }
um_inline um_vec2 um_sub2(um_vec2 a, um_vec2 b) { return um_v2(a.x - b.x, a.y - b.y); }
um_inline um_vec2 um_mul2(um_vec2 a, float b) { return um_v2(a.x * b, a.y * b); }
um_inline um_vec2 um_div2(um_vec2 a, float b) { float v = 1.0f / b; return um_v2(a.x * v, a.y * v); }
um_inline um_vec2 um_mad2(um_vec2 a, um_vec2 b, float c) { return um_v2(a.x + b.x*c, a.y + b.y*c); }
um_inline um_vec2 um_neg2(um_vec2 a) { return um_v2(-a.x, -a.y); }
um_inline um_vec2 um_rcp2(um_vec2 a) { return um_v2(1.0f / a.x, 1.0f / a.y); }
um_inline um_vec2 um_mulv2(um_vec2 a, um_vec2 b) { return um_v2(a.x * b.x, a.y * b.y); }
um_inline um_vec2 um_divv2(um_vec2 a, um_vec2 b) { return um_v2(a.x / b.x, a.y / b.y); }
um_inline float um_dot2(um_vec2 a, um_vec2 b) { return a.x*b.x + a.y*b.y; }
um_inline float um_length2(um_vec2 a) { return um_sqrt(a.x*a.x + a.y*a.y); }
um_inline um_vec2 um_min2(um_vec2 a, um_vec2 b) { return um_v2(um_min(a.x, b.x), um_min(a.y, b.y)); }
um_inline um_vec2 um_max2(um_vec2 a, um_vec2 b) { return um_v2(um_max(a.x, b.x), um_max(a.y, b.y)); }
um_inline um_vec2 um_clamp2(um_vec2 a, um_vec2 minv, um_vec2 maxv) { return um_v2(um_clamp(a.x, minv.x, maxv.x), um_clamp(a.y, minv.y, maxv.y)); }
um_inline um_vec2 um_lerp2(um_vec2 a, um_vec2 b, float t) { return um_v2(um_lerp(a.x, b.x, t), um_lerp(a.y, b.y, t)); }
um_inline um_vec2 um_normalize2(um_vec2 a) { float v = um_length2(a); v = v >= FLT_MIN ? 1.0f / v : 0.0f; return um_v2(a.x * v, a.y * v); }
um_inline bool um_equal2(um_vec2 a, um_vec2 b) { return (a.x == b.x) & (a.y == b.y); }
um_inline um_vec3 um_dup3(float a) { return um_v3(a, a, a); }
um_inline um_vec3 um_add3(um_vec3 a, um_vec3 b) { return um_v3(a.x + b.x, a.y + b.y, a.z + b.z); }
um_inline um_vec3 um_sub3(um_vec3 a, um_vec3 b) { return um_v3(a.x - b.x, a.y - b.y, a.z - b.z); }
um_inline um_vec3 um_mul3(um_vec3 a, float b) { return um_v3(a.x * b, a.y * b, a.z * b); }
um_inline um_vec3 um_div3(um_vec3 a, float b) { float v = 1.0f / b; return um_v3(a.x * v, a.y * v, a.z * v); }
um_inline um_vec3 um_mad3(um_vec3 a, um_vec3 b, float c) { return um_v3(a.x + b.x*c, a.y + b.y*c, a.z + b.z*c); }
um_inline um_vec3 um_neg3(um_vec3 a) { return um_v3(-a.x, -a.y, -a.z); }
um_inline um_vec3 um_rcp3(um_vec3 a) { return um_v3(1.0f / a.x, 1.0f / a.y, 1.0f / a.z); }
um_inline um_vec3 um_mulv3(um_vec3 a, um_vec3 b) { return um_v3(a.x * b.x, a.y * b.y, a.z * b.z); }
um_inline um_vec3 um_divv3(um_vec3 a, um_vec3 b) { return um_v3(a.x / b.x, a.y / b.y, a.z / b.z); }
um_inline float um_dot3(um_vec3 a, um_vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
um_inline float um_length3(um_vec3 a) { return um_sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }
um_inline um_vec3 um_min3(um_vec3 a, um_vec3 b) { return um_v3(um_min(a.x, b.x), um_min(a.y, b.y), um_min(a.z, b.z)); }
um_inline um_vec3 um_max3(um_vec3 a, um_vec3 b) { return um_v3(um_max(a.x, b.x), um_max(a.y, b.y), um_max(a.z, b.z)); }
um_inline um_vec3 um_clamp3(um_vec3 a, um_vec3 minv, um_vec3 maxv) { return um_v3(um_clamp(a.x, minv.x, maxv.x), um_clamp(a.y, minv.y, maxv.y), um_clamp(a.z, minv.z, maxv.z)); }
um_inline um_vec3 um_lerp3(um_vec3 a, um_vec3 b, float t) { return um_v3(um_lerp(a.x, b.x, t), um_lerp(a.y, b.y, t), um_lerp(a.z, b.z, t)); }
um_inline um_vec3 um_normalize3(um_vec3 a) { float v = um_length3(a); v = v >= FLT_MIN ? 1.0f / v : 0.0f; return um_v3(a.x * v, a.y * v, a.z * v); }
um_inline um_vec3 um_cross3(um_vec3 a, um_vec3 b) { return um_v3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x); }
um_inline bool um_equal3(um_vec3 a, um_vec3 b) { return (a.x == b.x) & (a.y == b.y) & (a.z == b.z); }
um_inline um_vec4 um_dup4(float a) { return um_v4(a, a, a, a); }
um_inline um_vec4 um_add4(um_vec4 a, um_vec4 b) { return um_v4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w); }
um_inline um_vec4 um_sub4(um_vec4 a, um_vec4 b) { return um_v4(a.x - b.x, a.y - b.y, a.z - b.z, a.w - b.w); }
um_inline um_vec4 um_mul4(um_vec4 a, float b) { return um_v4(a.x * b, a.y * b, a.z * b, a.w * b); }
um_inline um_vec4 um_div4(um_vec4 a, float b) { float v = 1.0f / b; return um_v4(a.x * v, a.y * v, a.z * v, a.w * v); }
um_inline um_vec4 um_mad4(um_vec4 a, um_vec4 b, float c) { return um_v4(a.x + b.x*c, a.y + b.y*c, a.z + b.z*c, a.w + b.w*c); }
um_inline um_vec4 um_neg4(um_vec4 a) { return um_v4(-a.x, -a.y, -a.z, -a.w); }
um_inline um_vec4 um_rcp4(um_vec4 a) { return um_v4(1.0f / a.x, 1.0f / a.y, 1.0f / a.z, 1.0f / a.w); }
um_inline um_vec4 um_mulv4(um_vec4 a, um_vec4 b) { return um_v4(a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w); }
um_inline um_vec4 um_divv4(um_vec4 a, um_vec4 b) { return um_v4(a.x / b.x, a.y / b.y, a.z / b.z, a.w / b.w); }
um_inline float um_dot4(um_vec4 a, um_vec4 b) { return a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w; }
um_inline float um_length4(um_vec4 a) { return um_sqrt(a.x*a.x + a.y*a.y + a.z*a.z + a.w*a.w); }
um_inline um_vec4 um_min4(um_vec4 a, um_vec4 b) { return um_v4(um_min(a.x, b.x), um_min(a.y, b.y), um_min(a.z, b.z), um_min(a.w, b.w)); }
um_inline um_vec4 um_max4(um_vec4 a, um_vec4 b) { return um_v4(um_max(a.x, b.x), um_max(a.y, b.y), um_max(a.z, b.z), um_max(a.w, b.w)); }
um_inline um_vec4 um_clamp4(um_vec4 a, um_vec4 minv, um_vec4 maxv) { return um_v4(um_clamp(a.x, minv.x, maxv.x), um_clamp(a.y, minv.y, maxv.y), um_clamp(a.z, minv.z, maxv.z), um_clamp(a.w, minv.w, maxv.w)); }
um_inline um_vec4 um_lerp4(um_vec4 a, um_vec4 b, float t) { return um_v4(um_lerp(a.x, b.x, t), um_lerp(a.y, b.y, t), um_lerp(a.z, b.z, t), um_lerp(a.w, b.w, t)); }
um_inline um_vec4 um_normalize4(um_vec4 a) { float v = um_length4(a); v = v >= FLT_MIN ? 1.0f / v : 0.0f; return um_v4(a.x * v, a.y * v, a.z * v, a.w * v); }
um_inline bool um_equal4(um_vec4 a, um_vec4 b) { return (a.x == b.x) & (a.y == b.y) & (a.z == b.z) & (a.w == b.w); }
um_inline um_quat um_quat_add(um_quat a, um_quat b) { return um_quat_xyzw(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w); }
um_inline um_quat um_quat_sub(um_quat a, um_quat b) { return um_quat_xyzw(a.x - b.x, a.y - b.y, a.z - b.z, a.w - b.w); }
um_inline um_quat um_quat_mad(um_quat a, um_quat b, float c) { return um_quat_xyzw(a.x + b.x * c, a.y + b.y * c, a.z + b.z * c, a.w + b.w * c); }
um_inline um_quat um_quat_div(um_quat a, float b) { float v = 1.0f / b; return um_quat_xyzw(a.x * v, a.y * v, a.z * v, a.w * v); }
um_inline um_quat um_quat_neg(um_quat a) { return um_quat_xyzw(-a.x, -a.y, -a.z, -a.w); }
um_inline um_quat um_quat_inverse(um_quat a) { return um_quat_div(um_quat_xyzw(-a.x, -a.y, -a.z, a.w), (a.x*a.x + a.y*a.y + a.z*a.z + a.w*a.w)); }
um_inline um_quat um_quat_inverse_normalized(um_quat a) { return um_quat_xyzw(-a.x, -a.y, -a.z, a.w); }
um_inline float um_quat_dot(um_quat a, um_quat b) { return a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w; }
um_inline float um_quat_length(um_quat a) { return um_sqrt(a.x*a.x + a.y*a.y + a.z*a.z + a.w*a.w); }
um_inline um_quat um_quat_normalize(um_quat a) { float v = um_quat_length(a); v = v >= FLT_MIN ? 1.0f / v : 0.0f; return um_quat_xyzw(a.x * v, a.y * v, a.z * v, a.w * v); }
um_inline bool um_quat_equal(um_quat a, um_quat b) { return (a.x == b.x) & (a.y == b.y) & (a.z == b.z) & (a.w == b.w); }
um_abi um_quat um_quat_mul(um_quat a, um_quat b);
um_abi um_vec3 um_quat_rotate(um_quat a, um_vec3 b);
#define um_quat_mulrev(a, b) um_quat_mul((b), (a))
um_abi um_quat um_quat_lerp(um_quat a, um_quat b, float t);
um_abi um_quat um_quat_slerp(um_quat a, um_quat b, float t);
um_abi um_quat um_quat_axis_angle(um_vec3 axis, float radians);
#define um_mat_is_affine(a) um_equal4((a).cols[3], um_v4(0, 0, 0, 1))
um_abi um_mat um_mat_basis(um_vec3 x, um_vec3 y, um_vec3 z, um_vec3 origin);
um_abi um_mat um_mat_inverse_basis(um_vec3 x, um_vec3 y, um_vec3 z, um_vec3 origin);
um_abi um_mat um_mat_translate(um_vec3 offset);
um_abi um_mat um_mat_scale(um_vec3 scale);
um_abi um_mat um_mat_rotate(um_quat rotation);
um_abi um_mat um_mat_trs(um_vec3 translation, um_quat rotation, um_vec3 scale);
um_abi um_mat um_mat_rotate_x(float radians);
um_abi um_mat um_mat_rotate_y(float radians);
um_abi um_mat um_mat_rotate_z(float radians);
um_abi um_mat um_mat_look_at(um_vec3 eye, um_vec3 target, um_vec3 up_hint);
um_abi um_mat um_mat_perspective_gl(float fov, float aspect, float near_plane, float far_plane);
um_abi um_mat um_mat_perspective_d3d(float fov, float aspect, float near_plane, float far_plane);
um_abi float um_mat_determinant(um_mat a);
um_abi um_mat um_mat_inverse(um_mat a);
um_abi um_mat um_mat_transpose(um_mat a);
um_abi um_mat um_mat_mul(um_mat a, um_mat b);
um_abi um_vec4 um_mat_mull(um_vec4 a, um_mat b);
um_abi um_vec4 um_mat_mulr(um_mat a, um_vec4 b);
#define um_mat_mulrev(a, b) um_mat_mul((b), (a))
um_abi um_mat um_mat_add(um_mat a, um_mat b);
um_abi um_mat um_mat_sub(um_mat a, um_mat b);
um_abi um_mat um_mat_mad(um_mat a, um_mat b, float c);
um_abi um_mat um_mat_muls(um_mat a, float b);
um_abi um_vec3 um_transform_point(const um_mat *a, um_vec3 b);
um_abi um_vec3 um_transform_direction(const um_mat *a, um_vec3 b);
um_abi um_vec3 um_transform_extent(const um_mat *a, um_vec3 b);
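/*
Usage sketch (not part of the original header): composing a world-to-clip
matrix with this API; the values are arbitrary placeholders.

    um_mat model = um_mat_trs(um_v3(0, 1, 0), um_quat_axis_angle(um_v3(0, 1, 0), 0.5f * UM_PI), um_one3);
    um_mat view = um_mat_look_at(um_v3(0, 2, 5), um_zero3, um_v3(0, 1, 0));
    um_mat proj = um_mat_perspective_gl(UM_PI / 3.0f, 16.0f / 9.0f, 0.1f, 100.0f);
    um_mat world_to_clip = um_mat_mul(proj, um_mat_mul(view, model));
*/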
#if defined(__cplusplus)
um_inline um_vec2 operator+(const um_vec2 &a, const um_vec2 &b) { return um_add2(a, b); }
um_inline um_vec2 operator-(const um_vec2 &a, const um_vec2 &b) { return um_sub2(a, b); }
um_inline um_vec2 operator*(const um_vec2 &a, const um_vec2 &b) { return um_mulv2(a, b); }
um_inline um_vec2 operator/(const um_vec2 &a, const um_vec2 &b) { return um_divv2(a, b); }
um_inline um_vec2 operator*(const um_vec2 &a, float b) { return um_mul2(a, b); }
um_inline um_vec2 operator/(const um_vec2 &a, float b) { return um_div2(a, b); }
um_inline um_vec2 operator-(const um_vec2 &a) { return um_neg2(a); }
um_inline um_vec3 operator+(const um_vec3 &a, const um_vec3 &b) { return um_add3(a, b); }
um_inline um_vec3 operator-(const um_vec3 &a, const um_vec3 &b) { return um_sub3(a, b); }
um_inline um_vec3 operator*(const um_vec3 &a, const um_vec3 &b) { return um_mulv3(a, b); }
um_inline um_vec3 operator/(const um_vec3 &a, const um_vec3 &b) { return um_divv3(a, b); }
um_inline um_vec3 operator*(const um_vec3 &a, float b) { return um_mul3(a, b); }
um_inline um_vec3 operator/(const um_vec3 &a, float b) { return um_div3(a, b); }
um_inline um_vec3 operator-(const um_vec3 &a) { return um_neg3(a); }
um_inline um_vec4 operator+(const um_vec4 &a, const um_vec4 &b) { return um_add4(a, b); }
um_inline um_vec4 operator-(const um_vec4 &a, const um_vec4 &b) { return um_sub4(a, b); }
um_inline um_vec4 operator*(const um_vec4 &a, const um_vec4 &b) { return um_mulv4(a, b); }
um_inline um_vec4 operator/(const um_vec4 &a, const um_vec4 &b) { return um_divv4(a, b); }
um_inline um_vec4 operator*(const um_vec4 &a, float b) { return um_mul4(a, b); }
um_inline um_vec4 operator/(const um_vec4 &a, float b) { return um_div4(a, b); }
um_inline um_vec4 operator-(const um_vec4 &a) { return um_neg4(a); }
um_inline um_quat operator+(const um_quat &a, const um_quat &b) { return um_quat_add(a, b); }
um_inline um_quat operator-(const um_quat &a, const um_quat &b) { return um_quat_sub(a, b); }
um_inline um_quat operator*(const um_quat &a, const um_quat &b) { return um_quat_mul(a, b); }
um_inline um_mat operator+(const um_mat &a, const um_mat &b) { return um_mat_add(a, b); }
um_inline um_mat operator-(const um_mat &a, const um_mat &b) { return um_mat_sub(a, b); }
um_inline um_mat operator*(const um_mat &a, const um_mat &b) { return um_mat_mul(a, b); }
#endif
#if defined(_MSC_VER)
#pragma warning(pop)
#endif
#endif
#if defined(UMATH_IMPLEMENTATION) || defined(__INTELLISENSE__)
#ifndef UMATH_H_IMPLEMENTED
#define UMATH_H_IMPLEMENTED
const um_mat um_mat_identity = {{{
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f,
}}};
um_abi um_quat um_quat_mul(um_quat a, um_quat b)
{
return um_quat_xyzw(
a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z);
}
um_abi um_vec3 um_quat_rotate(um_quat a, um_vec3 b)
{
float xy = a.x*b.y - a.y*b.x;
float xz = a.x*b.z - a.z*b.x;
float yz = a.y*b.z - a.z*b.y;
return um_v3(
2.0f * (+ a.w*yz + a.y*xy + a.z*xz) + b.x,
2.0f * (- a.x*xy - a.w*xz + a.z*yz) + b.y,
2.0f * (- a.x*xz - a.y*yz + a.w*xy) + b.z);
}
um_abi um_quat um_quat_lerp(um_quat a, um_quat b, float t)
{
float af = 1.0f - t, bf = t;
float x = af*a.x + bf*b.x;
float y = af*a.y + bf*b.y;
float z = af*a.z + bf*b.z;
float w = af*a.w + bf*b.w;
return um_quat_xyzw(x, y, z, w);
}
um_abi um_quat um_quat_slerp(um_quat a, um_quat b, float t)
{
float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
if (dot < 0.0f) {
dot = -dot;
b.x = -b.x; b.y = -b.y; b.z = -b.z; b.w = -b.w;
}
float omega = acosf(um_min(um_max(dot, 0.0f), 1.0f));
if (omega <= FLT_MIN) return a;
float rcp_so = 1.0f / sinf(omega);
float af = sinf((1.0f - t) * omega) * rcp_so;
float bf = sinf(t * omega) * rcp_so;
float x = af*a.x + bf*b.x;
float y = af*a.y + bf*b.y;
float z = af*a.z + bf*b.z;
float w = af*a.w + bf*b.w;
return um_quat_normalize(um_quat_xyzw(x, y, z, w));
}
um_abi um_quat um_quat_axis_angle(um_vec3 axis, float radians)
{
axis = um_normalize3(axis);
float c = cosf(radians * 0.5f), s = sinf(radians * 0.5f);
return um_quat_xyzw(axis.x * s, axis.y * s, axis.z * s, c);
}
um_abi um_mat um_mat_basis(um_vec3 x, um_vec3 y, um_vec3 z, um_vec3 origin)
{
return um_mat_rows(
x.x, y.x, z.x, origin.x,
x.y, y.y, z.y, origin.y,
x.z, y.z, z.z, origin.z,
0, 0, 0, 1,
);
}
um_abi um_mat um_mat_inverse_basis(um_vec3 x, um_vec3 y, um_vec3 z, um_vec3 origin)
{
return um_mat_rows(
x.x, x.y, x.z, -um_dot3(origin, x),
y.x, y.y, y.z, -um_dot3(origin, y),
z.x, z.y, z.z, -um_dot3(origin, z),
0, 0, 0, 1);
}
um_abi um_mat um_mat_translate(um_vec3 offset)
{
return um_mat_rows(
1, 0, 0, offset.x,
0, 1, 0, offset.y,
0, 0, 1, offset.z,
0, 0, 0, 1);
}
um_abi um_mat um_mat_scale(um_vec3 scale)
{
return um_mat_rows(
scale.x, 0, 0, 0,
0, scale.y, 0, 0,
0, 0, scale.z, 0,
0, 0, 0, 1);
}
um_abi um_mat um_mat_rotate(um_quat rotation)
{
um_quat q = rotation;
float xx = q.x*q.x, xy = q.x*q.y, xz = q.x*q.z, xw = q.x*q.w;
float yy = q.y*q.y, yz = q.y*q.z, yw = q.y*q.w;
float zz = q.z*q.z, zw = q.z*q.w;
return um_mat_rows(
2.0f * (- yy - zz + 0.5f), 2.0f * (- zw + xy), 2.0f * (+ xz + yw), 0,
2.0f * (+ xy + zw), 2.0f * (- xx - zz + 0.5f), 2.0f * (- xw + yz), 0,
2.0f * (- yw + xz), 2.0f * (+ xw + yz), 2.0f * (- xx - yy + 0.5f), 0,
0, 0, 0, 1,
);
}
um_abi um_mat um_mat_trs(um_vec3 translation, um_quat rotation, um_vec3 scale)
{
um_quat q = rotation;
float xx = q.x*q.x, xy = q.x*q.y, xz = q.x*q.z, xw = q.x*q.w;
float yy = q.y*q.y, yz = q.y*q.z, yw = q.y*q.w;
float zz = q.z*q.z, zw = q.z*q.w;
float sx = 2.0f * scale.x, sy = 2.0f * scale.y, sz = 2.0f * scale.z;
return um_mat_rows(
sx * (- yy - zz + 0.5f), sy * (- zw + xy), sz * (+ xz + yw), translation.x,
sx * (+ xy + zw), sy * (- xx - zz + 0.5f), sz * (- xw + yz), translation.y,
sx * (- yw + xz), sy * (+ xw + yz), sz * (- xx - yy + 0.5f), translation.z,
0, 0, 0, 1,
);
}
um_abi um_mat um_mat_rotate_x(float radians)
{
float c = cosf(radians), s = sinf(radians);
return um_mat_rows(
1, 0, 0, 0,
0, c, -s, 0,
0, s, c, 0,
0, 0, 0, 1,
);
}
um_abi um_mat um_mat_rotate_y(float radians)
{
float c = cosf(radians), s = sinf(radians);
return um_mat_rows(
c, 0, s, 0,
0, 1, 0, 0,
-s, 0, c, 0,
0, 0, 0, 1,
);
}
um_abi um_mat um_mat_rotate_z(float radians)
{
float c = cosf(radians), s = sinf(radians);
return um_mat_rows(
c, -s, 0, 0,
s, c, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1,
);
}
um_abi um_mat um_mat_look_at(um_vec3 eye, um_vec3 target, um_vec3 up_hint)
{
um_vec3 dir = um_normalize3(um_sub3(target, eye));
um_vec3 right = um_normalize3(um_cross3(dir, up_hint));
um_vec3 up = um_normalize3(um_cross3(right, dir));
return um_mat_inverse_basis(right, up, dir, eye);
}
um_abi um_mat um_mat_perspective_d3d(float fov, float aspect, float near_plane, float far_plane)
{
float tan_fov = 1.0f / tanf(fov / 2.0f);
float n = near_plane, f = far_plane;
return um_mat_rows(
tan_fov / aspect, 0, 0, 0,
0, tan_fov, 0, 0,
0, 0, f / (f-n), -(f*n)/(f-n),
0, 0, 1, 0);
}
um_abi um_mat um_mat_perspective_gl(float fov, float aspect, float near_plane, float far_plane)
{
float tan_fov = 1.0f / tanf(fov / 2.0f);
float n = near_plane, f = far_plane;
return um_mat_rows(
tan_fov / aspect, 0, 0, 0,
0, tan_fov, 0, 0,
0, 0, (f+n) / (f-n), -2.0f * (f*n)/(f-n),
0, 0, 1, 0);
}
um_abi float um_mat_determinant(um_mat a)
{
if (um_mat_is_affine(a)) {
return
- a.m13*a.m22*a.m31 + a.m12*a.m23*a.m31 + a.m13*a.m21*a.m32
- a.m11*a.m23*a.m32 - a.m12*a.m21*a.m33 + a.m11*a.m22*a.m33;
} else {
return
+ a.m14*a.m23*a.m32*a.m41 - a.m13*a.m24*a.m32*a.m41 - a.m14*a.m22*a.m33*a.m41 + a.m12*a.m24*a.m33*a.m41
+ a.m13*a.m22*a.m34*a.m41 - a.m12*a.m23*a.m34*a.m41 - a.m14*a.m23*a.m31*a.m42 + a.m13*a.m24*a.m31*a.m42
+ a.m14*a.m21*a.m33*a.m42 - a.m11*a.m24*a.m33*a.m42 - a.m13*a.m21*a.m34*a.m42 + a.m11*a.m23*a.m34*a.m42
+ a.m14*a.m22*a.m31*a.m43 - a.m12*a.m24*a.m31*a.m43 - a.m14*a.m21*a.m32*a.m43 + a.m11*a.m24*a.m32*a.m43
+ a.m12*a.m21*a.m34*a.m43 - a.m11*a.m22*a.m34*a.m43 - a.m13*a.m22*a.m31*a.m44 + a.m12*a.m23*a.m31*a.m44
+ a.m13*a.m21*a.m32*a.m44 - a.m11*a.m23*a.m32*a.m44 - a.m12*a.m21*a.m33*a.m44 + a.m11*a.m22*a.m33*a.m44;
}
}
um_abi um_mat um_mat_inverse(um_mat a)
{
if (um_mat_is_affine(a)) {
float det =
- a.m13*a.m22*a.m31 + a.m12*a.m23*a.m31 + a.m13*a.m21*a.m32
- a.m11*a.m23*a.m32 - a.m12*a.m21*a.m33 + a.m11*a.m22*a.m33;
float rcp_det = 1.0f / det;
return um_mat_rows(
( - a.m23*a.m32 + a.m22*a.m33) * rcp_det,
( + a.m13*a.m32 - a.m12*a.m33) * rcp_det,
( - a.m13*a.m22 + a.m12*a.m23) * rcp_det,
(a.m14*a.m23*a.m32 - a.m13*a.m24*a.m32 - a.m14*a.m22*a.m33 + a.m12*a.m24*a.m33 + a.m13*a.m22*a.m34 - a.m12*a.m23*a.m34) * rcp_det,
( + a.m23*a.m31 - a.m21*a.m33) * rcp_det,
( - a.m13*a.m31 + a.m11*a.m33) * rcp_det,
( + a.m13*a.m21 - a.m11*a.m23) * rcp_det,
(a.m13*a.m24*a.m31 - a.m14*a.m23*a.m31 + a.m14*a.m21*a.m33 - a.m11*a.m24*a.m33 - a.m13*a.m21*a.m34 + a.m11*a.m23*a.m34) * rcp_det,
( - a.m22*a.m31 + a.m21*a.m32) * rcp_det,
( + a.m12*a.m31 - a.m11*a.m32) * rcp_det,
( - a.m12*a.m21 + a.m11*a.m22) * rcp_det,
(a.m14*a.m22*a.m31 - a.m12*a.m24*a.m31 - a.m14*a.m21*a.m32 + a.m11*a.m24*a.m32 + a.m12*a.m21*a.m34 - a.m11*a.m22*a.m34) * rcp_det,
0, 0, 0, 1
);
} else {
float det =
+ a.m14*a.m23*a.m32*a.m41 - a.m13*a.m24*a.m32*a.m41 - a.m14*a.m22*a.m33*a.m41 + a.m12*a.m24*a.m33*a.m41
+ a.m13*a.m22*a.m34*a.m41 - a.m12*a.m23*a.m34*a.m41 - a.m14*a.m23*a.m31*a.m42 + a.m13*a.m24*a.m31*a.m42
+ a.m14*a.m21*a.m33*a.m42 - a.m11*a.m24*a.m33*a.m42 - a.m13*a.m21*a.m34*a.m42 + a.m11*a.m23*a.m34*a.m42
+ a.m14*a.m22*a.m31*a.m43 - a.m12*a.m24*a.m31*a.m43 - a.m14*a.m21*a.m32*a.m43 + a.m11*a.m24*a.m32*a.m43
+ a.m12*a.m21*a.m34*a.m43 - a.m11*a.m22*a.m34*a.m43 - a.m13*a.m22*a.m31*a.m44 + a.m12*a.m23*a.m31*a.m44
+ a.m13*a.m21*a.m32*a.m44 - a.m11*a.m23*a.m32*a.m44 - a.m12*a.m21*a.m33*a.m44 + a.m11*a.m22*a.m33*a.m44;
float rcp_det = 1.0f / det;
return um_mat_rows(
(a.m23*a.m34*a.m42 - a.m24*a.m33*a.m42 + a.m24*a.m32*a.m43 - a.m22*a.m34*a.m43 - a.m23*a.m32*a.m44 + a.m22*a.m33*a.m44) * rcp_det,
(a.m14*a.m33*a.m42 - a.m13*a.m34*a.m42 - a.m14*a.m32*a.m43 + a.m12*a.m34*a.m43 + a.m13*a.m32*a.m44 - a.m12*a.m33*a.m44) * rcp_det,
(a.m13*a.m24*a.m42 - a.m14*a.m23*a.m42 + a.m14*a.m22*a.m43 - a.m12*a.m24*a.m43 - a.m13*a.m22*a.m44 + a.m12*a.m23*a.m44) * rcp_det,
(a.m14*a.m23*a.m32 - a.m13*a.m24*a.m32 - a.m14*a.m22*a.m33 + a.m12*a.m24*a.m33 + a.m13*a.m22*a.m34 - a.m12*a.m23*a.m34) * rcp_det,
(a.m24*a.m33*a.m41 - a.m23*a.m34*a.m41 - a.m24*a.m31*a.m43 + a.m21*a.m34*a.m43 + a.m23*a.m31*a.m44 - a.m21*a.m33*a.m44) * rcp_det,
(a.m13*a.m34*a.m41 - a.m14*a.m33*a.m41 + a.m14*a.m31*a.m43 - a.m11*a.m34*a.m43 - a.m13*a.m31*a.m44 + a.m11*a.m33*a.m44) * rcp_det,
(a.m14*a.m23*a.m41 - a.m13*a.m24*a.m41 - a.m14*a.m21*a.m43 + a.m11*a.m24*a.m43 + a.m13*a.m21*a.m44 - a.m11*a.m23*a.m44) * rcp_det,
(a.m13*a.m24*a.m31 - a.m14*a.m23*a.m31 + a.m14*a.m21*a.m33 - a.m11*a.m24*a.m33 - a.m13*a.m21*a.m34 + a.m11*a.m23*a.m34) * rcp_det,
(a.m22*a.m34*a.m41 - a.m24*a.m32*a.m41 + a.m24*a.m31*a.m42 - a.m21*a.m34*a.m42 - a.m22*a.m31*a.m44 + a.m21*a.m32*a.m44) * rcp_det,
(a.m14*a.m32*a.m41 - a.m12*a.m34*a.m41 - a.m14*a.m31*a.m42 + a.m11*a.m34*a.m42 + a.m12*a.m31*a.m44 - a.m11*a.m32*a.m44) * rcp_det,
(a.m12*a.m24*a.m41 - a.m14*a.m22*a.m41 + a.m14*a.m21*a.m42 - a.m11*a.m24*a.m42 - a.m12*a.m21*a.m44 + a.m11*a.m22*a.m44) * rcp_det,
(a.m14*a.m22*a.m31 - a.m12*a.m24*a.m31 - a.m14*a.m21*a.m32 + a.m11*a.m24*a.m32 + a.m12*a.m21*a.m34 - a.m11*a.m22*a.m34) * rcp_det,
(a.m23*a.m32*a.m41 - a.m22*a.m33*a.m41 - a.m23*a.m31*a.m42 + a.m21*a.m33*a.m42 + a.m22*a.m31*a.m43 - a.m21*a.m32*a.m43) * rcp_det,
(a.m12*a.m33*a.m41 - a.m13*a.m32*a.m41 + a.m13*a.m31*a.m42 - a.m11*a.m33*a.m42 - a.m12*a.m31*a.m43 + a.m11*a.m32*a.m43) * rcp_det,
(a.m13*a.m22*a.m41 - a.m12*a.m23*a.m41 - a.m13*a.m21*a.m42 + a.m11*a.m23*a.m42 + a.m12*a.m21*a.m43 - a.m11*a.m22*a.m43) * rcp_det,
(a.m12*a.m23*a.m31 - a.m13*a.m22*a.m31 + a.m13*a.m21*a.m32 - a.m11*a.m23*a.m32 - a.m12*a.m21*a.m33 + a.m11*a.m22*a.m33) * rcp_det,
);
}
}
um_abi um_mat um_mat_transpose(um_mat a)
{
return um_mat_rows(
a.m11, a.m21, a.m31, a.m41,
a.m12, a.m22, a.m32, a.m42,
a.m13, a.m23, a.m33, a.m43,
a.m14, a.m24, a.m34, a.m44,
);
}
um_abi um_mat um_mat_mul(um_mat a, um_mat b)
{
return um_mat_rows(
a.m11*b.m11 + a.m12*b.m21 + a.m13*b.m31 + a.m14*b.m41,
a.m11*b.m12 + a.m12*b.m22 + a.m13*b.m32 + a.m14*b.m42,
a.m11*b.m13 + a.m12*b.m23 + a.m13*b.m33 + a.m14*b.m43,
a.m11*b.m14 + a.m12*b.m24 + a.m13*b.m34 + a.m14*b.m44,
a.m21*b.m11 + a.m22*b.m21 + a.m23*b.m31 + a.m24*b.m41,
a.m21*b.m12 + a.m22*b.m22 + a.m23*b.m32 + a.m24*b.m42,
a.m21*b.m13 + a.m22*b.m23 + a.m23*b.m33 + a.m24*b.m43,
a.m21*b.m14 + a.m22*b.m24 + a.m23*b.m34 + a.m24*b.m44,
a.m31*b.m11 + a.m32*b.m21 + a.m33*b.m31 + a.m34*b.m41,
a.m31*b.m12 + a.m32*b.m22 + a.m33*b.m32 + a.m34*b.m42,
a.m31*b.m13 + a.m32*b.m23 + a.m33*b.m33 + a.m34*b.m43,
a.m31*b.m14 + a.m32*b.m24 + a.m33*b.m34 + a.m34*b.m44,
a.m41*b.m11 + a.m42*b.m21 + a.m43*b.m31 + a.m44*b.m41,
a.m41*b.m12 + a.m42*b.m22 + a.m43*b.m32 + a.m44*b.m42,
a.m41*b.m13 + a.m42*b.m23 + a.m43*b.m33 + a.m44*b.m43,
a.m41*b.m14 + a.m42*b.m24 + a.m43*b.m34 + a.m44*b.m44,
);
}
um_abi um_mat um_mat_add(um_mat a, um_mat b)
{
return um_mat_rows(
a.m11 + b.m11, a.m12 + b.m12, a.m13 + b.m13, a.m14 + b.m14,
a.m21 + b.m21, a.m22 + b.m22, a.m23 + b.m23, a.m24 + b.m24,
a.m31 + b.m31, a.m32 + b.m32, a.m33 + b.m33, a.m34 + b.m34,
a.m41 + b.m41, a.m42 + b.m42, a.m43 + b.m43, a.m44 + b.m44,
);
}
um_abi um_mat um_mat_sub(um_mat a, um_mat b)
{
return um_mat_rows(
a.m11 - b.m11, a.m12 - b.m12, a.m13 - b.m13, a.m14 - b.m14,
a.m21 - b.m21, a.m22 - b.m22, a.m23 - b.m23, a.m24 - b.m24,
a.m31 - b.m31, a.m32 - b.m32, a.m33 - b.m33, a.m34 - b.m34,
a.m41 - b.m41, a.m42 - b.m42, a.m43 - b.m43, a.m44 - b.m44,
);
}
um_abi um_mat um_mat_mad(um_mat a, um_mat b, float c)
{
return um_mat_rows(
a.m11 + b.m11 * c, a.m12 + b.m12 * c, a.m13 + b.m13 * c, a.m14 + b.m14 * c,
a.m21 + b.m21 * c, a.m22 + b.m22 * c, a.m23 + b.m23 * c, a.m24 + b.m24 * c,
a.m31 + b.m31 * c, a.m32 + b.m32 * c, a.m33 + b.m33 * c, a.m34 + b.m34 * c,
a.m41 + b.m41 * c, a.m42 + b.m42 * c, a.m43 + b.m43 * c, a.m44 + b.m44 * c,
);
}
um_abi um_mat um_mat_muls(um_mat a, float b)
{
return um_mat_rows(
a.m11 * b, a.m12 * b, a.m13 * b, a.m14 * b,
a.m21 * b, a.m22 * b, a.m23 * b, a.m24 * b,
a.m31 * b, a.m32 * b, a.m33 * b, a.m34 * b,
a.m41 * b, a.m42 * b, a.m43 * b, a.m44 * b,
);
}
um_abi um_vec4 um_mat_mull(um_vec4 a, um_mat b)
{
return um_v4(
a.x*b.m11 + a.y*b.m21 + a.z*b.m31 + a.w*b.m41,
a.x*b.m12 + a.y*b.m22 + a.z*b.m32 + a.w*b.m42,
a.x*b.m13 + a.y*b.m23 + a.z*b.m33 + a.w*b.m43,
a.x*b.m14 + a.y*b.m24 + a.z*b.m34 + a.w*b.m44);
}
um_abi um_vec4 um_mat_mulr(um_mat a, um_vec4 b)
{
return um_v4(
a.m11*b.x + a.m12*b.y + a.m13*b.z + a.m14*b.w,
a.m21*b.x + a.m22*b.y + a.m23*b.z + a.m24*b.w,
a.m31*b.x + a.m32*b.y + a.m33*b.z + a.m34*b.w,
a.m41*b.x + a.m42*b.y + a.m43*b.z + a.m44*b.w);
}
um_abi um_vec3 um_transform_point(const um_mat *a, um_vec3 b)
{
return um_v3(
a->m11*b.x + a->m12*b.y + a->m13*b.z + a->m14,
a->m21*b.x + a->m22*b.y + a->m23*b.z + a->m24,
a->m31*b.x + a->m32*b.y + a->m33*b.z + a->m34);
}
um_abi um_vec3 um_transform_direction(const um_mat *a, um_vec3 b)
{
return um_v3(
a->m11*b.x + a->m12*b.y + a->m13*b.z,
a->m21*b.x + a->m22*b.y + a->m23*b.z,
a->m31*b.x + a->m32*b.y + a->m33*b.z);
}
um_abi um_vec3 um_transform_extent(const um_mat *a, um_vec3 b)
{
return um_v3(
um_abs(a->m11)*b.x + um_abs(a->m12)*b.y + um_abs(a->m13)*b.z,
um_abs(a->m21)*b.x + um_abs(a->m22)*b.y + um_abs(a->m23)*b.z,
um_abs(a->m31)*b.x + um_abs(a->m32)*b.y + um_abs(a->m33)*b.z);
}
#endif
#endif


@@ -0,0 +1,120 @@
@ctype vec3 um_vec3
@ctype vec4 um_vec4
@ctype mat4 um_mat
@block vertex_shared
layout(binding=0) uniform mesh_vertex_ubo {
mat4 geometry_to_world;
mat4 normal_to_world;
mat4 world_to_clip;
vec4 blend_weights[16];
float f_num_blend_shapes;
};
layout(binding=0) uniform sampler2DArray blend_shapes;
vec3 evaluate_blend_shape(int vertex_index)
{
ivec2 coord = ivec2(vertex_index & (1024 - 1), vertex_index >> 10);
int num_blend_shapes = int(f_num_blend_shapes);
vec3 offset = vec3(0.0);
for (int i = 0; i < num_blend_shapes; i++) {
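// Weights are packed four per vec4: blend_weights[16] holds up to 64 scalar
// weights, i >> 2 selects the vec4 and i & 3 selects the component within it.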
vec4 packed = blend_weights[i >> 2];
float weight = packed[i & 3];
offset += weight * texelFetch(blend_shapes, ivec3(coord, i), 0).xyz;
}
return offset;
}
@end
@vs static_vertex
@include_block vertex_shared
layout(location=0) in vec3 a_position;
layout(location=1) in vec3 a_normal;
layout(location=2) in vec2 a_uv;
layout(location=3) in float a_vertex_index;
out vec3 v_normal;
out vec2 v_uv;
void main()
{
vec3 local_pos = a_position;
local_pos += evaluate_blend_shape(int(a_vertex_index));
vec3 world_pos = (geometry_to_world * vec4(local_pos, 1.0)).xyz;
gl_Position = world_to_clip * vec4(world_pos, 1.0);
v_normal = normalize((normal_to_world * vec4(a_normal, 0.0)).xyz);
v_uv = a_uv;
}
@end
@vs skinned_vertex
@include_block vertex_shared
layout(binding=1) uniform skin_vertex_ubo {
mat4 bones[64];
};
layout(location=0) in vec3 a_position;
layout(location=1) in vec3 a_normal;
layout(location=2) in vec2 a_uv;
layout(location=3) in float a_vertex_index;
#if SOKOL_GLSL
layout(location=4) in vec4 a_bone_indices;
#else
layout(location=4) in ivec4 a_bone_indices;
#endif
layout(location=5) in vec4 a_bone_weights;
out vec3 v_normal;
out vec2 v_uv;
void main()
{
mat4 bind_to_world
= bones[int(a_bone_indices.x)] * a_bone_weights.x
+ bones[int(a_bone_indices.y)] * a_bone_weights.y
+ bones[int(a_bone_indices.z)] * a_bone_weights.z
+ bones[int(a_bone_indices.w)] * a_bone_weights.w;
vec3 local_pos = a_position;
local_pos += evaluate_blend_shape(int(a_vertex_index));
vec3 world_pos = (bind_to_world * vec4(local_pos, 1.0)).xyz;
vec3 world_normal = (bind_to_world * vec4(a_normal, 0.0)).xyz;
gl_Position = world_to_clip * vec4(world_pos, 1.0);
v_normal = normalize(world_normal);
v_uv = a_uv;
}
@end
@fs lit_pixel
in vec3 v_normal;
in vec2 v_uv;
out vec4 o_color;
void main()
{
float l = dot(v_normal, normalize(vec3(1.0, 1.0, 1.0)));
// HACK: We need to use UV here somehow so it doesn't get stripped..
// TODO: Implement textures
l += v_uv.x * 0.0001;
l = l * 0.5 + 0.5;
o_color = vec4(l, l, l, 1.0);
}
@end
@program static_lit static_vertex lit_pixel
@program skinned_lit skinned_vertex lit_pixel

File diff suppressed because it is too large


@@ -0,0 +1,981 @@
#include "external/sokol_app.h"
#include "external/sokol_gfx.h"
#include "external/sokol_time.h"
#include "external/sokol_glue.h"
#include "external/umath.h"
#include "../../ufbx.h"
#include "shaders/mesh.h"
#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
#define MAX_BONES 64
#define MAX_BLEND_SHAPES 64
um_vec2 ufbx_to_um_vec2(ufbx_vec2 v) { return um_v2((float)v.x, (float)v.y); }
um_vec3 ufbx_to_um_vec3(ufbx_vec3 v) { return um_v3((float)v.x, (float)v.y, (float)v.z); }
um_quat ufbx_to_um_quat(ufbx_quat v) { return um_quat_xyzw((float)v.x, (float)v.y, (float)v.z, (float)v.w); }
um_mat ufbx_to_um_mat(ufbx_matrix m) {
return um_mat_rows(
(float)m.m00, (float)m.m01, (float)m.m02, (float)m.m03,
(float)m.m10, (float)m.m11, (float)m.m12, (float)m.m13,
(float)m.m20, (float)m.m21, (float)m.m22, (float)m.m23,
0, 0, 0, 1,
);
}
typedef struct mesh_vertex {
um_vec3 position;
um_vec3 normal;
um_vec2 uv;
float f_vertex_index;
} mesh_vertex;
typedef struct skin_vertex {
uint8_t bone_index[4];
uint8_t bone_weight[4];
} skin_vertex;
static const sg_layout_desc mesh_vertex_layout = {
.attrs = {
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT3 },
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT3 },
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT2 },
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT },
},
};
static const sg_layout_desc skinned_mesh_vertex_layout = {
.attrs = {
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT3 },
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT3 },
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT2 },
{ .buffer_index = 0, .format = SG_VERTEXFORMAT_FLOAT },
{ .buffer_index = 1, .format = SG_VERTEXFORMAT_BYTE4 },
{ .buffer_index = 1, .format = SG_VERTEXFORMAT_UBYTE4N },
},
};
void print_error(const ufbx_error *error, const char *description)
{
char buffer[1024];
ufbx_format_error(buffer, sizeof(buffer), error);
fprintf(stderr, "%s\n%s\n", description, buffer);
}
void *alloc_imp(size_t type_size, size_t count)
{
void *ptr = malloc(type_size * count);
if (!ptr) {
fprintf(stderr, "Out of memory\n");
exit(1);
}
memset(ptr, 0, type_size * count);
return ptr;
}
void *alloc_dup_imp(size_t type_size, size_t count, const void *data)
{
void *ptr = malloc(type_size * count);
if (!ptr) {
fprintf(stderr, "Out of memory\n");
exit(1);
}
memcpy(ptr, data, type_size * count);
return ptr;
}
#define alloc(m_type, m_count) (m_type*)alloc_imp(sizeof(m_type), (m_count))
#define alloc_dup(m_type, m_count, m_data) (m_type*)alloc_dup_imp(sizeof(m_type), (m_count), (m_data))
size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }
size_t max_sz(size_t a, size_t b) { return b < a ? a : b; }
size_t clamp_sz(size_t a, size_t min_a, size_t max_a) { return min_sz(max_sz(a, min_a), max_a); }
typedef struct viewer_node_anim {
float time_begin;
float framerate;
size_t num_frames;
um_quat const_rot;
um_vec3 const_pos;
um_vec3 const_scale;
um_quat *rot;
um_vec3 *pos;
um_vec3 *scale;
} viewer_node_anim;
typedef struct viewer_blend_channel_anim {
float const_weight;
float *weight;
} viewer_blend_channel_anim;
typedef struct viewer_anim {
const char *name;
float time_begin;
float time_end;
float framerate;
size_t num_frames;
viewer_node_anim *nodes;
viewer_blend_channel_anim *blend_channels;
} viewer_anim;
typedef struct viewer_node {
int32_t parent_index;
um_mat geometry_to_node;
um_mat node_to_parent;
um_mat node_to_world;
um_mat geometry_to_world;
um_mat normal_to_world;
} viewer_node;
typedef struct viewer_blend_channel {
float weight;
} viewer_blend_channel;
typedef struct viewer_mesh_part {
sg_buffer vertex_buffer;
sg_buffer index_buffer;
sg_buffer skin_buffer; // Optional
size_t num_indices;
int32_t material_index;
} viewer_mesh_part;
typedef struct viewer_mesh {
int32_t *instance_node_indices;
size_t num_instances;
viewer_mesh_part *parts;
size_t num_parts;
bool aabb_is_local;
um_vec3 aabb_min;
um_vec3 aabb_max;
// Skinning (optional)
bool skinned;
size_t num_bones;
int32_t bone_indices[MAX_BONES];
um_mat bone_matrices[MAX_BONES];
// Blend shapes (optional)
size_t num_blend_shapes;
sg_image blend_shape_image;
int32_t blend_channel_indices[MAX_BLEND_SHAPES];
} viewer_mesh;
typedef struct viewer_scene {
viewer_node *nodes;
size_t num_nodes;
viewer_mesh *meshes;
size_t num_meshes;
viewer_blend_channel *blend_channels;
size_t num_blend_channels;
viewer_anim *animations;
size_t num_animations;
um_vec3 aabb_min;
um_vec3 aabb_max;
} viewer_scene;
typedef struct viewer {
viewer_scene scene;
float anim_time;
sg_shader shader_mesh_lit_static;
sg_shader shader_mesh_lit_skinned;
sg_pipeline pipe_mesh_lit_static;
sg_pipeline pipe_mesh_lit_skinned;
sg_image empty_blend_shape_image;
um_mat world_to_view;
um_mat view_to_clip;
um_mat world_to_clip;
float camera_yaw;
float camera_pitch;
float camera_distance;
uint32_t mouse_buttons;
} viewer;
void read_node(viewer_node *vnode, ufbx_node *node)
{
vnode->parent_index = node->parent ? node->parent->typed_id : -1;
vnode->node_to_parent = ufbx_to_um_mat(node->node_to_parent);
vnode->node_to_world = ufbx_to_um_mat(node->node_to_world);
vnode->geometry_to_node = ufbx_to_um_mat(node->geometry_to_node);
vnode->geometry_to_world = ufbx_to_um_mat(node->geometry_to_world);
vnode->normal_to_world = ufbx_to_um_mat(ufbx_matrix_for_normals(&node->geometry_to_world));
}
sg_image pack_blend_channels_to_image(ufbx_mesh *mesh, ufbx_blend_channel **channels, size_t num_channels)
{
// We pack the blend shape data into a 1024xNxM texture array where each texel
// contains the position offset of vertex `Y*1024 + X` for blend shape `Z`.
uint32_t tex_width = 1024;
uint32_t tex_height_min = ((uint32_t)mesh->num_vertices + tex_width - 1) / tex_width;
uint32_t tex_slices = (uint32_t)num_channels;
// Let's make the texture size a power of two just to be sure...
uint32_t tex_height = 1;
while (tex_height < tex_height_min) {
tex_height *= 2;
}
// NOTE: A proper implementation would probably compress the shape offsets to FP16
// or some other quantization to save space, we use full FP32 here for simplicity.
size_t tex_texels = tex_width * tex_height * tex_slices;
um_vec4 *tex_data = alloc(um_vec4, tex_texels);
// Copy the vertex offsets from each blend shape
for (uint32_t ci = 0; ci < num_channels; ci++) {
ufbx_blend_channel *chan = channels[ci];
um_vec4 *slice_data = tex_data + tex_width * tex_height * ci;
// Let's use the last blend shape if there are multiple blend phases, as we don't
// support them. Fortunately this feature is quite rarely used in practice.
ufbx_blend_shape *shape = chan->keyframes.data[chan->keyframes.count - 1].shape;
for (size_t oi = 0; oi < shape->num_offsets; oi++) {
uint32_t ix = (uint32_t)shape->offset_vertices.data[oi];
if (ix < mesh->num_vertices) {
// We don't need to do any indexing to X/Y here as the memory layout of
// `slice_data` pixels is the same as the linear buffer would be.
slice_data[ix].xyz = ufbx_to_um_vec3(shape->position_offsets.data[oi]);
}
}
}
// Upload the combined blend offset image to the GPU
sg_image image = sg_make_image(&(sg_image_desc){
.type = SG_IMAGETYPE_ARRAY,
.width = (int)tex_width,
.height = (int)tex_height,
.num_slices = tex_slices,
.pixel_format = SG_PIXELFORMAT_RGBA32F,
.data.subimage[0][0] = { tex_data, tex_texels * sizeof(um_vec4) },
});
free(tex_data);
return image;
}
void read_mesh(viewer_mesh *vmesh, ufbx_mesh *mesh)
{
// Count the number of needed parts and temporary buffers
size_t max_parts = 0;
size_t max_triangles = 0;
// We need to render each material of the mesh in a separate part, so let's
// count the number of parts and maximum number of triangles needed.
for (size_t pi = 0; pi < mesh->materials.count; pi++) {
ufbx_mesh_material *mesh_mat = &mesh->materials.data[pi];
if (mesh_mat->num_triangles == 0) continue;
max_parts += 1;
max_triangles = max_sz(max_triangles, mesh_mat->num_triangles);
}
// Temporary buffers
size_t num_tri_indices = mesh->max_face_triangles * 3;
uint32_t *tri_indices = alloc(uint32_t, num_tri_indices);
mesh_vertex *vertices = alloc(mesh_vertex, max_triangles * 3);
skin_vertex *skin_vertices = alloc(skin_vertex, max_triangles * 3);
skin_vertex *mesh_skin_vertices = alloc(skin_vertex, mesh->num_vertices);
uint32_t *indices = alloc(uint32_t, max_triangles * 3);
// Result buffers
viewer_mesh_part *parts = alloc(viewer_mesh_part, max_parts);
size_t num_parts = 0;
// In FBX files a single mesh can be instanced by multiple nodes. ufbx handles the connection
// in two ways: (1) `ufbx_node.mesh/light/camera/etc` contains a pointer to the data "attribute"
// that the node uses and (2) each element that can be connected to a node contains a list of
// `ufbx_node*` instances, e.g. `ufbx_mesh.instances`.
vmesh->num_instances = mesh->instances.count;
vmesh->instance_node_indices = alloc(int32_t, mesh->instances.count);
for (size_t i = 0; i < mesh->instances.count; i++) {
vmesh->instance_node_indices[i] = (int32_t)mesh->instances.data[i]->typed_id;
}
// Create the vertex buffers
size_t num_blend_shapes = 0;
ufbx_blend_channel *blend_channels[MAX_BLEND_SHAPES];
size_t num_bones = 0;
ufbx_skin_deformer *skin = NULL;
if (mesh->skin_deformers.count > 0) {
vmesh->skinned = true;
// Having multiple skin deformers attached at once is exceedingly rare so we can just
// pick the first one without having to worry too much about it.
skin = mesh->skin_deformers.data[0];
// NOTE: A proper implementation would split meshes with too many bones to chunks but
// for simplicity we're going to just pick the first `MAX_BONES` ones.
for (size_t ci = 0; ci < skin->clusters.count; ci++) {
ufbx_skin_cluster *cluster = skin->clusters.data[ci];
if (num_bones < MAX_BONES) {
vmesh->bone_indices[num_bones] = (int32_t)cluster->bone_node->typed_id;
vmesh->bone_matrices[num_bones] = ufbx_to_um_mat(cluster->geometry_to_bone);
num_bones++;
}
}
vmesh->num_bones = num_bones;
// Pre-calculate the skinned vertex bones/weights for each vertex as they will probably
// be shared by multiple indices.
for (size_t vi = 0; vi < mesh->num_vertices; vi++) {
size_t num_weights = 0;
float total_weight = 0.0f;
float weights[4] = { 0.0f };
uint8_t clusters[4] = { 0 };
// `ufbx_skin_vertex` contains the offset and number of weights that deform the vertex
// in a descending weight order so we can pick the first N weights to use and get a
// reasonable approximation of the skinning.
ufbx_skin_vertex vertex_weights = skin->vertices.data[vi];
for (size_t wi = 0; wi < vertex_weights.num_weights; wi++) {
if (num_weights >= 4) break;
ufbx_skin_weight weight = skin->weights.data[vertex_weights.weight_begin + wi];
// Since we only support a fixed number of bones up to `MAX_BONES` and we take the
// first N ones, we need to ignore weights with too high a `cluster_index`.
if (weight.cluster_index < MAX_BONES) {
total_weight += (float)weight.weight;
clusters[num_weights] = (uint8_t)weight.cluster_index;
weights[num_weights] = (float)weight.weight;
num_weights++;
}
}
// Normalize and quantize the weights to 8 bits. We need to be a bit careful to make
// sure the _quantized_ sum is normalized, i.e. all the 8-bit values sum to 255.
if (total_weight > 0.0f) {
skin_vertex *skin_vert = &mesh_skin_vertices[vi];
uint32_t quantized_sum = 0;
for (size_t i = 0; i < 4; i++) {
uint8_t quantized_weight = (uint8_t)((float)weights[i] / total_weight * 255.0f);
quantized_sum += quantized_weight;
skin_vert->bone_index[i] = clusters[i];
skin_vert->bone_weight[i] = quantized_weight;
}
skin_vert->bone_weight[0] += 255 - quantized_sum;
}
}
}
// Fetch blend channels from all attached blend deformers.
for (size_t di = 0; di < mesh->blend_deformers.count; di++) {
ufbx_blend_deformer *deformer = mesh->blend_deformers.data[di];
for (size_t ci = 0; ci < deformer->channels.count; ci++) {
ufbx_blend_channel *chan = deformer->channels.data[ci];
if (chan->keyframes.count == 0) continue;
if (num_blend_shapes < MAX_BLEND_SHAPES) {
blend_channels[num_blend_shapes] = chan;
vmesh->blend_channel_indices[num_blend_shapes] = (int32_t)chan->typed_id;
num_blend_shapes++;
}
}
}
if (num_blend_shapes > 0) {
vmesh->blend_shape_image = pack_blend_channels_to_image(mesh, blend_channels, num_blend_shapes);
vmesh->num_blend_shapes = num_blend_shapes;
}
// Our shader supports only a single material per draw call so we need to split the mesh
// into parts by material. `ufbx_mesh_material` contains a handy compact list of the faces
// that use the material, which we use here.
for (size_t pi = 0; pi < mesh->materials.count; pi++) {
ufbx_mesh_material *mesh_mat = &mesh->materials.data[pi];
if (mesh_mat->num_triangles == 0) continue;
viewer_mesh_part *part = &parts[num_parts++];
size_t num_indices = 0;
// First fetch all vertices into a flat non-indexed buffer, we also need to
// triangulate the faces
for (size_t fi = 0; fi < mesh_mat->num_faces; fi++) {
ufbx_face face = mesh->faces.data[mesh_mat->face_indices.data[fi]];
size_t num_tris = ufbx_triangulate_face(tri_indices, num_tri_indices, mesh, face);
ufbx_vec2 default_uv = { 0 };
// Iterate through every vertex of every triangle in the triangulated result
for (size_t vi = 0; vi < num_tris * 3; vi++) {
uint32_t ix = tri_indices[vi];
mesh_vertex *vert = &vertices[num_indices];
ufbx_vec3 pos = ufbx_get_vertex_vec3(&mesh->vertex_position, ix);
ufbx_vec3 normal = ufbx_get_vertex_vec3(&mesh->vertex_normal, ix);
ufbx_vec2 uv = mesh->vertex_uv.exists ? ufbx_get_vertex_vec2(&mesh->vertex_uv, ix) : default_uv;
vert->position = ufbx_to_um_vec3(pos);
vert->normal = um_normalize3(ufbx_to_um_vec3(normal));
vert->uv = ufbx_to_um_vec2(uv);
vert->f_vertex_index = (float)mesh->vertex_indices.data[ix];
// The skinning vertex stream is pre-calculated above so we just need to
// copy the right one by the vertex index.
if (skin) {
skin_vertices[num_indices] = mesh_skin_vertices[mesh->vertex_indices.data[ix]];
}
num_indices++;
}
}
ufbx_vertex_stream streams[2];
size_t num_streams = 1;
streams[0].data = vertices;
streams[0].vertex_size = sizeof(mesh_vertex);
if (skin) {
streams[1].data = skin_vertices;
streams[1].vertex_size = sizeof(skin_vertex);
num_streams = 2;
}
// Optimize the flat vertex buffer into an indexed one. `ufbx_generate_indices()`
// compacts the vertex buffer and returns the number of used vertices.
ufbx_error error;
size_t num_vertices = ufbx_generate_indices(streams, num_streams, indices, num_indices, NULL, &error);
if (error.type != UFBX_ERROR_NONE) {
print_error(&error, "Failed to generate index buffer");
exit(1);
}
// To unify the code we use `ufbx_load_opts.allow_null_material` to make ufbx create a
// `ufbx_mesh_material` even if there are no materials, so `mesh_mat->material` may be `NULL` here.
part->num_indices = num_indices;
if (mesh_mat->material) {
part->material_index = (int32_t)mesh_mat->material->typed_id;
} else {
part->material_index = -1;
}
// Create the GPU buffers from the temporary `vertices` and `indices` arrays
part->index_buffer = sg_make_buffer(&(sg_buffer_desc){
.size = num_indices * sizeof(uint32_t),
.type = SG_BUFFERTYPE_INDEXBUFFER,
.data = { indices, num_indices * sizeof(uint32_t) },
});
part->vertex_buffer = sg_make_buffer(&(sg_buffer_desc){
.size = num_vertices * sizeof(mesh_vertex),
.type = SG_BUFFERTYPE_VERTEXBUFFER,
.data = { vertices, num_vertices * sizeof(mesh_vertex) },
});
if (vmesh->skinned) {
part->skin_buffer = sg_make_buffer(&(sg_buffer_desc){
.size = num_vertices * sizeof(skin_vertex),
.type = SG_BUFFERTYPE_VERTEXBUFFER,
.data = { skin_vertices, num_vertices * sizeof(skin_vertex) },
});
}
}
// Free the temporary buffers
free(tri_indices);
free(vertices);
free(skin_vertices);
free(mesh_skin_vertices);
free(indices);
// Compute bounds from the vertices
vmesh->aabb_is_local = mesh->skinned_is_local;
vmesh->aabb_min = um_dup3(+INFINITY);
vmesh->aabb_max = um_dup3(-INFINITY);
for (size_t i = 0; i < mesh->num_vertices; i++) {
um_vec3 pos = ufbx_to_um_vec3(mesh->skinned_position.values.data[i]);
vmesh->aabb_min = um_min3(vmesh->aabb_min, pos);
vmesh->aabb_max = um_max3(vmesh->aabb_max, pos);
}
vmesh->parts = parts;
vmesh->num_parts = num_parts;
}
void read_blend_channel(viewer_blend_channel *vchan, ufbx_blend_channel *chan)
{
vchan->weight = (float)chan->weight;
}
void read_node_anim(viewer_anim *va, viewer_node_anim *vna, ufbx_anim_stack *stack, ufbx_node *node)
{
vna->rot = alloc(um_quat, va->num_frames);
vna->pos = alloc(um_vec3, va->num_frames);
vna->scale = alloc(um_vec3, va->num_frames);
bool const_rot = true, const_pos = true, const_scale = true;
// Sample the node's transform evenly for the whole animation stack duration
for (size_t i = 0; i < va->num_frames; i++) {
double time = stack->time_begin + (double)i / va->framerate;
ufbx_transform transform = ufbx_evaluate_transform(&stack->anim, node, time);
vna->rot[i] = ufbx_to_um_quat(transform.rotation);
vna->pos[i] = ufbx_to_um_vec3(transform.translation);
vna->scale[i] = ufbx_to_um_vec3(transform.scale);
if (i > 0) {
// Negated quaternions are equivalent, but interpolating between ones of different
// polarity takes the longer path, so flip the quaternion if necessary.
if (um_quat_dot(vna->rot[i], vna->rot[i - 1]) < 0.0f) {
vna->rot[i] = um_quat_neg(vna->rot[i]);
}
// Keep track of which channels are constant for the whole animation as an optimization
if (!um_quat_equal(vna->rot[i - 1], vna->rot[i])) const_rot = false;
if (!um_equal3(vna->pos[i - 1], vna->pos[i])) const_pos = false;
if (!um_equal3(vna->scale[i - 1], vna->scale[i])) const_scale = false;
}
}
if (const_rot) { vna->const_rot = vna->rot[0]; free(vna->rot); vna->rot = NULL; }
if (const_pos) { vna->const_pos = vna->pos[0]; free(vna->pos); vna->pos = NULL; }
if (const_scale) { vna->const_scale = vna->scale[0]; free(vna->scale); vna->scale = NULL; }
}
void read_blend_channel_anim(viewer_anim *va, viewer_blend_channel_anim *vbca, ufbx_anim_stack *stack, ufbx_blend_channel *chan)
{
vbca->weight = alloc(float, va->num_frames);
bool const_weight = true;
// Sample the blend weight evenly for the whole animation stack duration
for (size_t i = 0; i < va->num_frames; i++) {
double time = stack->time_begin + (double)i / va->framerate;
ufbx_real weight = ufbx_evaluate_blend_weight(&stack->anim, chan, time);
vbca->weight[i] = (float)weight;
// Keep track of which channels are constant for the whole animation as an optimization
if (i > 0) {
if (vbca->weight[i - 1] != vbca->weight[i]) const_weight = false;
}
}
if (const_weight) { vbca->const_weight = vbca->weight[0]; free(vbca->weight); vbca->weight = NULL; }
}
void read_anim_stack(viewer_anim *va, ufbx_anim_stack *stack, ufbx_scene *scene)
{
const float target_framerate = 30.0f;
const size_t max_frames = 4096;
// Sample the animation evenly at `target_framerate` if possible while limiting the maximum
// number of frames to `max_frames` by potentially dropping FPS.
float duration = (float)stack->time_end - (float)stack->time_begin;
size_t num_frames = clamp_sz((size_t)(duration * target_framerate), 2, max_frames);
float framerate = (float)(num_frames - 1) / duration;
va->name = alloc_dup(char, stack->name.length + 1, stack->name.data);
va->time_begin = (float)stack->time_begin;
va->time_end = (float)stack->time_end;
va->framerate = framerate;
va->num_frames = num_frames;
// Sample the animations of all nodes and blend channels in the stack
va->nodes = alloc(viewer_node_anim, scene->nodes.count);
for (size_t i = 0; i < scene->nodes.count; i++) {
ufbx_node *node = scene->nodes.data[i];
read_node_anim(va, &va->nodes[i], stack, node);
}
va->blend_channels = alloc(viewer_blend_channel_anim, scene->blend_channels.count);
for (size_t i = 0; i < scene->blend_channels.count; i++) {
ufbx_blend_channel *chan = scene->blend_channels.data[i];
read_blend_channel_anim(va, &va->blend_channels[i], stack, chan);
}
}
void read_scene(viewer_scene *vs, ufbx_scene *scene)
{
vs->num_nodes = scene->nodes.count;
vs->nodes = alloc(viewer_node, vs->num_nodes);
for (size_t i = 0; i < vs->num_nodes; i++) {
read_node(&vs->nodes[i], scene->nodes.data[i]);
}
vs->num_meshes = scene->meshes.count;
vs->meshes = alloc(viewer_mesh, vs->num_meshes);
for (size_t i = 0; i < vs->num_meshes; i++) {
read_mesh(&vs->meshes[i], scene->meshes.data[i]);
}
vs->num_blend_channels = scene->blend_channels.count;
vs->blend_channels = alloc(viewer_blend_channel, vs->num_blend_channels);
for (size_t i = 0; i < vs->num_blend_channels; i++) {
read_blend_channel(&vs->blend_channels[i], scene->blend_channels.data[i]);
}
vs->num_animations = scene->anim_stacks.count;
vs->animations = alloc(viewer_anim, vs->num_animations);
for (size_t i = 0; i < vs->num_animations; i++) {
read_anim_stack(&vs->animations[i], scene->anim_stacks.data[i], scene);
}
}
void update_animation(viewer_scene *vs, viewer_anim *va, float time)
{
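// Map the time onto two adjacent baked frames `f0`/`f1` and a blend factor `t`,
// clamping at the last frame of the animation.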
float frame_time = (time - va->time_begin) * va->framerate;
size_t f0 = min_sz((size_t)frame_time + 0, va->num_frames - 1);
size_t f1 = min_sz((size_t)frame_time + 1, va->num_frames - 1);
float t = um_min(frame_time - (float)f0, 1.0f);
for (size_t i = 0; i < vs->num_nodes; i++) {
viewer_node *vn = &vs->nodes[i];
viewer_node_anim *vna = &va->nodes[i];
um_quat rot = vna->rot ? um_quat_lerp(vna->rot[f0], vna->rot[f1], t) : vna->const_rot;
um_vec3 pos = vna->pos ? um_lerp3(vna->pos[f0], vna->pos[f1], t) : vna->const_pos;
um_vec3 scale = vna->scale ? um_lerp3(vna->scale[f0], vna->scale[f1], t) : vna->const_scale;
vn->node_to_parent = um_mat_trs(pos, rot, scale);
}
for (size_t i = 0; i < vs->num_blend_channels; i++) {
viewer_blend_channel *vbc = &vs->blend_channels[i];
viewer_blend_channel_anim *vbca = &va->blend_channels[i];
vbc->weight = vbca->weight ? um_lerp(vbca->weight[f0], vbca->weight[f1], t) : vbca->const_weight;
}
}
void update_hierarchy(viewer_scene *vs)
{
for (size_t i = 0; i < vs->num_nodes; i++) {
viewer_node *vn = &vs->nodes[i];
// ufbx stores nodes in order where parent nodes always precede child nodes so we can
// evaluate the transform hierarchy with a flat loop.
if (vn->parent_index >= 0) {
vn->node_to_world = um_mat_mul(vs->nodes[vn->parent_index].node_to_world, vn->node_to_parent);
} else {
vn->node_to_world = vn->node_to_parent;
}
vn->geometry_to_world = um_mat_mul(vn->node_to_world, vn->geometry_to_node);
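// Normals are transformed by the inverse-transpose so they stay perpendicular
// to surfaces under non-uniform scaling.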
vn->normal_to_world = um_mat_transpose(um_mat_inverse(vn->geometry_to_world));
}
}
void init_pipelines(viewer *view)
{
sg_backend backend = sg_query_backend();
view->shader_mesh_lit_static = sg_make_shader(static_lit_shader_desc(backend));
view->pipe_mesh_lit_static = sg_make_pipeline(&(sg_pipeline_desc){
.shader = view->shader_mesh_lit_static,
.layout = mesh_vertex_layout,
.index_type = SG_INDEXTYPE_UINT32,
.face_winding = SG_FACEWINDING_CCW,
.cull_mode = SG_CULLMODE_BACK,
.depth = {
.compare = SG_COMPAREFUNC_LESS_EQUAL,
.write_enabled = true,
},
});
view->shader_mesh_lit_skinned = sg_make_shader(skinned_lit_shader_desc(backend));
view->pipe_mesh_lit_skinned = sg_make_pipeline(&(sg_pipeline_desc){
.shader = view->shader_mesh_lit_skinned,
.layout = skinned_mesh_vertex_layout,
.index_type = SG_INDEXTYPE_UINT32,
.face_winding = SG_FACEWINDING_CCW,
.cull_mode = SG_CULLMODE_BACK,
.depth = {
.compare = SG_COMPAREFUNC_LESS_EQUAL,
.write_enabled = true,
},
});
um_vec4 empty_blend_shape_data = { 0 };
view->empty_blend_shape_image = sg_make_image(&(sg_image_desc){
.type = SG_IMAGETYPE_ARRAY,
.width = 1,
.height = 1,
.num_slices = 1,
.pixel_format = SG_PIXELFORMAT_RGBA32F,
.data.subimage[0][0] = SG_RANGE(empty_blend_shape_data),
});
}
void load_scene(viewer_scene *vs, const char *filename)
{
ufbx_load_opts opts = {
.load_external_files = true,
.allow_null_material = true,
.generate_missing_normals = true,
// NOTE: We use this _only_ for computing the bounds of the scene!
// The viewer contains a proper implementation of skinning as well.
// You probably don't need this.
.evaluate_skinning = true,
.target_axes = {
.right = UFBX_COORDINATE_AXIS_POSITIVE_X,
.up = UFBX_COORDINATE_AXIS_POSITIVE_Y,
.front = UFBX_COORDINATE_AXIS_POSITIVE_Z,
},
.target_unit_meters = 1.0f,
};
ufbx_error error;
ufbx_scene *scene = ufbx_load_file(filename, &opts, &error);
if (!scene) {
print_error(&error, "Failed to load scene");
exit(1);
}
read_scene(vs, scene);
// Compute the world-space bounding box
vs->aabb_min = um_dup3(+INFINITY);
vs->aabb_max = um_dup3(-INFINITY);
for (size_t mesh_ix = 0; mesh_ix < vs->num_meshes; mesh_ix++) {
viewer_mesh *mesh = &vs->meshes[mesh_ix];
um_vec3 aabb_origin = um_mul3(um_add3(mesh->aabb_max, mesh->aabb_min), 0.5f);
um_vec3 aabb_extent = um_mul3(um_sub3(mesh->aabb_max, mesh->aabb_min), 0.5f);
if (mesh->aabb_is_local) {
for (size_t inst_ix = 0; inst_ix < mesh->num_instances; inst_ix++) {
viewer_node *node = &vs->nodes[mesh->instance_node_indices[inst_ix]];
um_vec3 world_origin = um_transform_point(&node->geometry_to_world, aabb_origin);
um_vec3 world_extent = um_transform_extent(&node->geometry_to_world, aabb_extent);
vs->aabb_min = um_min3(vs->aabb_min, um_sub3(world_origin, world_extent));
vs->aabb_max = um_max3(vs->aabb_max, um_add3(world_origin, world_extent));
}
} else {
vs->aabb_min = um_min3(vs->aabb_min, mesh->aabb_min);
vs->aabb_max = um_max3(vs->aabb_max, mesh->aabb_max);
}
}
ufbx_free_scene(scene);
}
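// D3D-style backends (D3D11, Metal, WebGPU) use a [0,1] clip-space depth range,
// whereas OpenGL uses [-1,1], so the projection matrix must be built to match.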
bool backend_uses_d3d_perspective(sg_backend backend)
{
switch (backend) {
case SG_BACKEND_GLCORE33: return false;
case SG_BACKEND_GLES2: return false;
case SG_BACKEND_GLES3: return false;
case SG_BACKEND_D3D11: return true;
case SG_BACKEND_METAL_IOS: return true;
case SG_BACKEND_METAL_MACOS: return true;
case SG_BACKEND_METAL_SIMULATOR: return true;
case SG_BACKEND_WGPU: return true;
case SG_BACKEND_DUMMY: return false;
default: assert(0 && "Unhandled backend"); return false;
}
}
void update_camera(viewer *view)
{
viewer_scene *vs = &view->scene;
um_vec3 aabb_origin = um_mul3(um_add3(vs->aabb_max, vs->aabb_min), 0.5f);
um_vec3 aabb_extent = um_mul3(um_sub3(vs->aabb_max, vs->aabb_min), 0.5f);
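// Orbit around the center of the scene bounds; scrolling adjusts `camera_distance`,
// which scales the orbit radius exponentially relative to the scene size.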
float distance = 2.5f * powf(2.0f, view->camera_distance) * um_max(um_max(aabb_extent.x, aabb_extent.y), aabb_extent.z);
um_quat camera_rot = um_quat_mul(
um_quat_axis_angle(um_v3(0,1,0), view->camera_yaw * UM_DEG_TO_RAD),
um_quat_axis_angle(um_v3(1,0,0), view->camera_pitch * UM_DEG_TO_RAD));
um_vec3 camera_target = aabb_origin;
um_vec3 camera_direction = um_quat_rotate(camera_rot, um_v3(0,0,1));
um_vec3 camera_pos = um_add3(camera_target, um_mul3(camera_direction, distance));
view->world_to_view = um_mat_look_at(camera_pos, camera_target, um_v3(0,1,0));
float fov = 50.0f * UM_DEG_TO_RAD;
float aspect = (float)sapp_width() / (float)sapp_height();
float near_plane = um_min(distance * 0.001f, 0.1f);
float far_plane = um_max(distance * 2.0f, 100.0f);
if (backend_uses_d3d_perspective(sg_query_backend())) {
view->view_to_clip = um_mat_perspective_d3d(fov, aspect, near_plane, far_plane);
} else {
view->view_to_clip = um_mat_perspective_gl(fov, aspect, near_plane, far_plane);
}
view->world_to_clip = um_mat_mul(view->view_to_clip, view->world_to_view);
}
void draw_mesh(viewer *view, viewer_node *node, viewer_mesh *mesh)
{
sg_image blend_shapes = mesh->num_blend_shapes > 0 ? mesh->blend_shape_image : view->empty_blend_shape_image;
if (mesh->skinned) {
sg_apply_pipeline(view->pipe_mesh_lit_skinned);
skin_vertex_ubo_t skin_ubo = { 0 };
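// Compose each bone's current node-to-world transform with its bind-pose
// geometry-to-bone matrix to get the final skinning matrices.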
for (size_t i = 0; i < mesh->num_bones; i++) {
viewer_node *bone = &view->scene.nodes[mesh->bone_indices[i]];
skin_ubo.bones[i] = um_mat_mul(bone->node_to_world, mesh->bone_matrices[i]);
}
sg_apply_uniforms(SG_SHADERSTAGE_VS, SLOT_skin_vertex_ubo, SG_RANGE_REF(skin_ubo));
} else {
sg_apply_pipeline(view->pipe_mesh_lit_static);
}
mesh_vertex_ubo_t mesh_ubo = {
.geometry_to_world = node->geometry_to_world,
.normal_to_world = node->normal_to_world,
.world_to_clip = view->world_to_clip,
.f_num_blend_shapes = (float)mesh->num_blend_shapes,
};
// sokol-shdc only supports vec4 arrays so reinterpret this `um_vec4` array as `float`
float *blend_weights = (float*)mesh_ubo.blend_weights;
for (size_t i = 0; i < mesh->num_blend_shapes; i++) {
blend_weights[i] = view->scene.blend_channels[mesh->blend_channel_indices[i]].weight;
}
sg_apply_uniforms(SG_SHADERSTAGE_VS, SLOT_mesh_vertex_ubo, SG_RANGE_REF(mesh_ubo));
for (size_t pi = 0; pi < mesh->num_parts; pi++) {
viewer_mesh_part *part = &mesh->parts[pi];
sg_bindings binds = {
.vertex_buffers[0] = part->vertex_buffer,
.vertex_buffers[1] = part->skin_buffer,
.index_buffer = part->index_buffer,
.vs_images[SLOT_blend_shapes] = blend_shapes,
};
sg_apply_bindings(&binds);
sg_draw(0, (int)part->num_indices, 1);
}
}
void draw_scene(viewer *view)
{
for (size_t mi = 0; mi < view->scene.num_meshes; mi++) {
viewer_mesh *mesh = &view->scene.meshes[mi];
for (size_t ni = 0; ni < mesh->num_instances; ni++) {
viewer_node *node = &view->scene.nodes[mesh->instance_node_indices[ni]];
draw_mesh(view, node, mesh);
}
}
}
viewer g_viewer;
const char *g_filename;
void init(void)
{
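// Use larger-than-default resource pools since complex scenes can create
// thousands of buffers and images.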
sg_setup(&(sg_desc){
.context = sapp_sgcontext(),
.buffer_pool_size = 4096,
.image_pool_size = 4096,
});
stm_setup();
init_pipelines(&g_viewer);
load_scene(&g_viewer.scene, g_filename);
}
void onevent(const sapp_event *e)
{
viewer *view = &g_viewer;
switch (e->type) {
case SAPP_EVENTTYPE_MOUSE_DOWN:
view->mouse_buttons |= 1u << (uint32_t)e->mouse_button;
break;
case SAPP_EVENTTYPE_MOUSE_UP:
view->mouse_buttons &= ~(1u << (uint32_t)e->mouse_button);
break;
case SAPP_EVENTTYPE_UNFOCUSED:
view->mouse_buttons = 0;
break;
case SAPP_EVENTTYPE_MOUSE_MOVE:
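// Dragging with the left mouse button orbits the camera; scale by the window
// size so the sensitivity is resolution independent.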
if (view->mouse_buttons & 1) {
float scale = um_min((float)sapp_width(), (float)sapp_height());
view->camera_yaw -= e->mouse_dx / scale * 180.0f;
view->camera_pitch -= e->mouse_dy / scale * 180.0f;
view->camera_pitch = um_clamp(view->camera_pitch, -89.0f, 89.0f);
}
break;
case SAPP_EVENTTYPE_MOUSE_SCROLL:
view->camera_distance += e->scroll_y * -0.02f;
view->camera_distance = um_clamp(view->camera_distance, -5.0f, 5.0f);
break;
default:
break;
}
}
void frame(void)
{
static uint64_t last_time;
float dt = (float)stm_sec(stm_laptime(&last_time));
dt = um_min(dt, 0.1f);
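// Advance the first animation stack (if any) and wrap the time so it loops.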
viewer_anim *anim = g_viewer.scene.num_animations > 0 ? &g_viewer.scene.animations[0] : NULL;
if (anim) {
g_viewer.anim_time += dt;
if (g_viewer.anim_time >= anim->time_end) {
g_viewer.anim_time -= anim->time_end - anim->time_begin;
}
update_animation(&g_viewer.scene, anim, g_viewer.anim_time);
}
update_camera(&g_viewer);
update_hierarchy(&g_viewer.scene);
sg_pass_action action = {
.colors[0] = {
.action = SG_ACTION_CLEAR,
.value = { 0.1f, 0.1f, 0.2f },
},
};
sg_begin_default_pass(&action, sapp_width(), sapp_height());
draw_scene(&g_viewer);
sg_end_pass();
sg_commit();
}
void cleanup(void)
{
sg_shutdown();
}
sapp_desc sokol_main(int argc, char* argv[]) {
if (argc <= 1) {
fprintf(stderr, "Usage: viewer file.fbx\n");
exit(1);
}
g_filename = argv[1];
return (sapp_desc){
.init_cb = &init,
.event_cb = &onevent,
.frame_cb = &frame,
.cleanup_cb = &cleanup,
.width = 800,
.height = 600,
.sample_count = 4,
.window_title = "ufbx viewer",
};
}