As far as differences go:
Microprofile:
- frame-based
- needs a build system
- memory usage starts at 2 MB per thread
- runs 2 threads of its own
- provides system-specific info
- good support for GPU workloads
- provides live view
- seems like a good choice for gamedev / rendering
utl::profiler:
- no specific pipeline
- single include
- memory usage starts at approximately nothing and would likely stay in the kilobyte range
- doesn't run any additional threads
- fully portable, nothing platform-specific whatsoever, just standard C++
- doesn't provide system-specific info, just pure timings
- seems like a good choice for small projects or embedded (since the only thing it needs is a C++ compiler)
I often found myself wondering "how much of the total runtime does this code segment take", and it's often quite annoying to figure out with optimizations enabled, especially when working on something new or testing someone else's implementation without the proper tooling set up. Wanted to have a single-include lib that would allow us to write something like:
```
PROFILE("Loop 1")
for (...) // some work
```
and have the next expression automatically record time & dump results to a table. Wrote a few macros to do exactly that a few months back, but they were primitive and basically unusable for recursive code.
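For reference, a minimal sketch of what such a primitive macro might have looked like (hypothetical names and details, not the actual library code): a scope timer that accumulates elapsed time per label, using the C++17 if-with-initializer trick to scope the timer to exactly the next statement. On recursive code every nesting level re-enters the same label, so each parent timer re-counts the time its children already measured, and the totals become meaningless:

```
#include <chrono>
#include <map>
#include <string>

// Hypothetical primitive version: accumulate wall time per label.
inline std::map<std::string, double> g_totals;

struct ScopeTimer {
    const char*                           label;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

    explicit ScopeTimer(const char* l) : label(l) {}
    ~ScopeTimer() {
        g_totals[label] += std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    }
};

// C++17 if-with-initializer scopes the timer to exactly the next statement.
#define PROFILE(label) if (ScopeTimer profile_timer_{label}; true)

int fib(int n) {
    PROFILE("fib") // re-entered at every recursion depth...
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}
// ...so g_totals["fib"] ends up many times larger than the real wall time.
```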
Tried to come up with a more generic solution that would build a call graph for nested profiler macros, handle threads and so on, but doing that naively would be super slow, since we'd need some kind of recursive map of nodes with callsites as keys.
Recently had a revelation: it's possible to use macro-generated thread_locals to associate callsites with integer IDs on the fly, and with some effort the call graph can be neatly encoded in a few contiguous arrays, with all the graph-building & traversal logic reduced to simple checks and array lookups. Realized threading can be supported quite easily too, in an almost lock-free fashion.
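The trick, sketched here with my own illustrative names (the library's actual internals live at the link below): each macro expansion declares its own thread_local marker object, and the marker's one-time construction is what hands that callsite an integer ID.

```
#include <cstdint>

// Hypothetical sketch of on-the-fly callsite-ID assignment. IDs here are
// per-thread, matching the thread-local call graphs described below; a
// global atomic counter would be the alternative design choice.
inline thread_local std::int32_t tl_next_callsite_id = 0;

struct CallsiteMarker {
    std::int32_t id;
    CallsiteMarker() : id(tl_next_callsite_id++) {} // runs once per callsite per thread
};

// __LINE__ pasting keeps two markers in the same scope from colliding.
#define PROFILE_CONCAT_IMPL(a, b) a##b
#define PROFILE_CONCAT(a, b)      PROFILE_CONCAT_IMPL(a, b)
#define PROFILE_MARKER()                                                   \
    thread_local CallsiteMarker PROFILE_CONCAT(callsite_marker_, __LINE__)
```

On the first pass through a callsite in a given thread, the marker constructs and grabs the next free ID; every later pass is just a read of an already-initialized thread_local.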
After a few days of effort, ended up building what I believe is a genuinely usable single-header profiling lib. Couldn't find anything quite like it, so I'd like to present it here and hear some opinions on the result:
https://github.com/DmitriBogdanov/UTL/blob/master/docs/modul...
About linear IDs: a call graph in the general case is a tree of nodes, where each node has a single parent and an arbitrary number of children. Each node accumulates the time spent in the "lower" branches. A neat property of a call graph relative to a generic tree is that every node can be associated with a callsite. For example, if some function f() calls itself recursively 3 times, there will be multiple nodes corresponding to it, but in terms of callsites there is still only one. So let's take a simple call graph as an example:
Let's say f() has callsite ID '0', g() has callsite ID '1', h() has callsite ID '2'. The call graph will then consist of N = 5 nodes with M = 3 different callsites. We can then encode all "prev" nodes as a single vector of length N, and all "next" nodes as an M x N matrix which holds some kind of sentinel value (like -1) in the places with no connection.

Every thread has a thread-local call-graph object that keeps track of this traversal; it holds a 'current_node_id'. Traversing backwards through the graph is a single array lookup, and traversing forwards to an existing call-graph node is a lookup & branch (sketched below). New nodes can be created pretty cheaply too, but that's too verbose for a comment.
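A rough sketch of that encoding and the two traversals, again with my own illustrative names rather than the library's actual code:

```
#include <cstdint>
#include <vector>

constexpr std::int32_t no_node = -1; // sentinel: "no connection here"

// Hypothetical per-thread call-graph storage: "prev" is an N-vector,
// "next" is an M x N matrix ([callsite][node] -> child node).
struct CallGraph {
    std::vector<std::int32_t>              prev{no_node}; // node 0 is the root
    std::vector<std::vector<std::int32_t>> next;
    std::int32_t                           current_node_id = 0;

    // Backwards traversal (leaving a profiled scope): a single array lookup.
    void traverse_back() { current_node_id = prev[current_node_id]; }

    // Forwards traversal (entering a profiled scope): lookup & branch.
    // A real implementation can pre-size the matrix when callsites register,
    // keeping the hot path branch-only; the bounds check keeps this sketch safe.
    void traverse_forward(std::int32_t callsite_id) {
        const bool known = static_cast<std::size_t>(callsite_id) < next.size() &&
                           next[callsite_id][current_node_id] != no_node;
        current_node_id = known ? next[callsite_id][current_node_id]
                                : create_node(callsite_id);
    }

    std::int32_t create_node(std::int32_t callsite_id) {
        const auto new_id = static_cast<std::int32_t>(prev.size());
        prev.push_back(current_node_id);                           // backward link to parent
        while (next.size() <= static_cast<std::size_t>(callsite_id))
            next.emplace_back(prev.size(), no_node);               // row for a new callsite
        for (auto& row : next) row.resize(prev.size(), no_node);   // column for the new node
        next[callsite_id][current_node_id] = new_id;               // forward link from parent
        return new_id;
    }
};
```

Node creation is spelled out here for completeness; the point is that both hot-path traversals stay simple array operations.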
The key to tracking the callsites and assigning them IDs is the thread_local variables generated by the macro: https://github.com/DmitriBogdanov/UTL/blob/master/include/UT...

When the callsite marker initializes (which only happens once), it gets a new ID. The timer then takes this 'callsite_id' and passes it to the forwards traversal. The way we get function names is by simply remembering __FILE__, __func__ and __LINE__ in another array of the call graph; they get saved during callsite marker initialization too. As far as performance goes, everything we do consists of cheap & simple operations; at this point, the main overhead is just from taking the timestamps.
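Putting the pieces together, the macro expansion might look roughly like this (a hypothetical reconstruction building on the sketches above, not the library's actual expansion; see the linked header for the real thing):

```
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <vector>

// Reuses CallGraph and tl_next_callsite_id from the sketches above.
// CallsiteMarker is extended here so that its one-time construction
// also saves the source location for the final results table.

struct SourceLoc { const char* label; const char* file; const char* func; int line; };

inline thread_local CallGraph              tl_call_graph;
inline thread_local std::vector<SourceLoc> tl_locations; // indexed by callsite ID

struct CallsiteMarker {
    std::int32_t id;
    explicit CallsiteMarker(SourceLoc loc) : id(tl_next_callsite_id++) {
        tl_locations.push_back(loc); // saved once, during initialization
    }
};

// Per-node accumulated time, grown on demand.
inline thread_local std::vector<std::chrono::steady_clock::duration> tl_node_totals;

inline void record_time(std::int32_t node_id, std::chrono::steady_clock::duration dt) {
    if (tl_node_totals.size() <= static_cast<std::size_t>(node_id))
        tl_node_totals.resize(static_cast<std::size_t>(node_id) + 1, {});
    tl_node_totals[static_cast<std::size_t>(node_id)] += dt;
}

struct ScopeTimer {
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

    explicit ScopeTimer(std::int32_t callsite_id) { tl_call_graph.traverse_forward(callsite_id); }
    ~ScopeTimer() {
        record_time(tl_call_graph.current_node_id, std::chrono::steady_clock::now() - start);
        tl_call_graph.traverse_back(); // a single lookup returns us to the parent node
    }
};

// Unique-name pasting (PROFILE_CONCAT above) omitted here for readability.
#define PROFILE(label)                                                                  \
    thread_local CallsiteMarker profile_marker_{{label, __FILE__, __func__, __LINE__}}; \
    if (ScopeTimer profile_timer_{profile_marker_.id}; true)
```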