dangoldin 5 days ago
Building a data pipeline using Snowflake primitives

dangoldin 11 days ago
You don’t need to - dbt and sqlmesh are competitive. I just prefer sqlmesh’s model over dbt’s, but dbt is much more dominant.

dangoldin 11 days ago
From an egress + storage cost standpoint, absolutely - that ends up being a big factor for these large-scale data systems.

There’s a prior discussion on HN about that post: https://news.ycombinator.com/item?id=38118577

And full disclosure: I’m the author of both posts - I just shifted my writing to be more focused on the company one.


dangoldin 11 days ago
Author here. The basic idea is that you want some way of defining metrics - something like “revenue = sum(sales) - sum(discount)” or “retention = whatever” - that gets generated via SQL at query time rather than being baked into a table. Then you can have higher confidence that multiple access paths all share the same definitions for the metrics.
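A minimal sketch of that idea, assuming a hypothetical metrics registry that compiles metric definitions into SQL at query time (the names `METRICS` and `compile_query` are illustrative, not the API of dbt, sqlmesh, or any specific tool):

```python
# Minimal sketch of a query-time metrics layer (illustrative only).
# Each metric is defined once as a SQL expression and compiled into every query,
# so dashboards, notebooks, and ad-hoc SQL all share the same definition.

METRICS = {
    "revenue": "SUM(sales) - SUM(discount)",
    "order_count": "COUNT(DISTINCT order_id)",
}

def compile_query(metric_names, table, group_by):
    """Build a SQL query from centrally defined metrics at query time."""
    select_exprs = [f"{METRICS[m]} AS {m}" for m in metric_names]
    cols = ", ".join(group_by + select_exprs)
    return (
        f"SELECT {cols}\n"
        f"FROM {table}\n"
        f"GROUP BY {', '.join(group_by)}"
    )

print(compile_query(["revenue", "order_count"], "orders", ["region"]))
# SELECT region, SUM(sales) - SUM(discount) AS revenue, COUNT(DISTINCT order_id) AS order_count
# FROM orders
# GROUP BY region
```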

dangoldin 12 days ago
Yes, but most data-heavy tasks are parallelizable, and SQL itself is naturally parallelizable. There's a reason Apache RAPIDS, Voltron, Kinetica, SQream, etc. exist.
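To illustrate why SQL aggregates parallelize so naturally (my own sketch, not how any of the engines named above are implemented): a SUM over a table decomposes into independent partial sums per partition, combined at the end.

```python
# Rough illustration: a SQL SUM decomposes into per-partition partial sums
# that can run on separate workers, plus a cheap combine step at the end.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(partition):
    # Each worker scans only its own partition.
    return sum(partition)

if __name__ == "__main__":
    rows = list(range(1_000_000))                # stand-in for a "sales" column
    partitions = [rows[i::4] for i in range(4)]  # 4-way partitioning

    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(partial_sum, partitions))

    total = sum(partials)                        # combine the partial results
    assert total == sum(rows)
    print(total)
```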

Full transparency: I don't have a huge amount of experience working at this massive scale, and to your point, you need to understand the problem and constraints before you propose a solution.