Founder's blog https://www.jitbit.com/ Blog by Alex Yumashev, founder and CEO of Jitbit Software, the company behind Jitbit Helpdesk en-us http://blogs.law.harvard.edu/tech/rss Jitbit RSS-Generator 1.1 Fri, 03 Mar 2023 10:20:27 GMT Fri, 03 Mar 2023 10:20:27 GMT https://www.jitbit.com/alexblog/chatgpt-versus-google/ https://www.jitbit.com/alexblog/chatgpt-versus-google/ ChatGPT won't "kill Google" because Google is already dead <p>I have just conducted an experiment where I forced myself to use Bing's new chat-based search for almost a week, and spoiler alert: I loved it. But I'll get into that later.</p><!--more--> <h2>Googling without Google</h2> <p>When was the last time you searched Google and found the answer on the first page without having to refine your query to get rid of all the "content marketing" b/s? Or those worthless websites that rank "listicles" based on who paid the most (looking at you, Capterra)?</p> <p>I analyzed my own search habits and then reached out and interviewed about 14 people I know personally, mostly developers and founders. It turns out that this is how most of us search for information online:</p> <p>For <b>consumer product reviews</b>, such as "Is this smartwatch any good?" or "What's the best mechanical keyboard in 2023?", we go to <b>YouTube, Amazon, or Reddit</b>.</p> <p>For <b>practical questions</b> like "How do I replace a gas boiler condensate pump?", we go to <b>YouTube</b>.</p> <p>For <b>"fun" questions</b> like "What's the coolest MTB trail in Tignes, France?" or "What's the most scenic route from L.A. to Yosemite?" it's <b>Instagram or YouTube</b>, again.</p> <p>Now where does this put <b>B2B product recommendations</b>?</p> <h2>What does this mean for us, founders?</h2> <p>I'm not just a searcher. I also run a B2B-company that wants to be discovered. Where does <i>that</i> happen?</p> <p>Turns out it's mostly <b>private communities</b>. This is also confirmed by all the customer interviews we conduct asking how people discovered our product, and looking at our funnel analytics. When someone needs new software in their tech stack, they turn to closed Slack groups, private forums, Discords, Telegram/Whatsapp chats to ask fellow founders and CTOs - what do they use, for example, for transactional emails? And if there's no suitable private community within the reach, it's usually <b>Reddit</b> or <b>Hackernews</b>.</p> <p>This is significant in terms of how small software entrepreneurs should adjust their marketing campaigns going forward. Nowadays, hardly anyone buys the product after searching for a generic term or landing from a PPC/referral campaign. Instead, the majority of customers learn about the product through recommendations from friends, peers, colleagues, clients, and then visit our website directly or search for the brand name on Google. We should stop running in the SEO hamster wheel and instead invest in product-led growth, viral loops, and features that amplify "word of mouth". However, this is a broad topic that requires a separate essay.</p> <h2>The one thing I still use(d) Google for</h2> <p>The only thing I kept using Google for is... coding questions. Error messages, class & method names, workarounds and snippets. Google is very good at that. And guess what? Bing's new chat-based search is even better. 
When I paste an error message it first finds all the pages that mention it, then analyzes what's common about them, then compiles a solution from all the results discovered, and gives me the answer citing the sources if I need more info.</p> <p>P.S. Check out how Jitbit Helpdesk <a href="https://www.jitbit.com/helpdesk/helpdesk-chatgpt-integration">integrates with ChatGPT</a></p> Fri, 03 Mar 2023 10:20:27 GMT https://www.jitbit.com/alexblog/tailwind/ https://www.jitbit.com/alexblog/tailwind/ I really wanted to like Tailwind CSS <h2>TL;DR</h2> <p>Nobody:</p> <p>Absolutely no one:</p> <p>Me: Here's what I think about Tailwind CSS!</p> <!--more--> <h2>First, a tip of the hat</h2> <p>Let's get one thing out of the way: Tailwind CSS <u><b>is</b></u> great.</p> <p>For starters, Tailwind is a very polished and well-thought-out <i>product</i>. As a fellow bootstrapper - I tip my hat. The docs are amazing, the examples are great, Tailwind UI Kit is a life saver and the "Refactoring UI" book is a must read. And I'm an extremely satisfied paying customer for Adam's &amp; Steve's stuff.</p> <p>Tailwind is also great as a tool - it's a perfect UI builder for <i>new projects</i>. Creating stuff from scratch with Tailwind is just ah-amazing. Heck, I have actually just redesigned this very website using Tailwind. You're in the flow, in the zone, wired in, out of this world, creating stuff.</p> <p>Think of it as a visual editor - "Figma for developers" - a graphics design tool without leaving your code editor. Playing with default classes, trying stuff, re-trying stuff, then throwing in some more stuff... Hey, even Tailwind's landing page advertises this very approach in their hero video. You start with a bunch of unstyled mess and work from there. Sounds grrreat.</p> <p>And those are exactly the two main reasons developers loooooove Tailwind: because (A) - developers love writing and rewriting everything <i>from scratch</i> (oh yeah, writing code is so much easier than reading code). And (B) - developers love doing stuff without leaving their code editors. And also, maybe (C) - developers tend to think about UI at the last minute. "Hey, I just hacked a cool project, now I need some UI for it - what can I use to turn this messy Times-New-Roman ugliness into something decent, fast?"</p> <p>I get it, not everyone is an experienced front-end dev (I'm surely not one!), who loves polishing and re-polishing the UI, pixel by pixel, color shade by color shade... Screw that, just give us a flexible system with some nice-looking defaults.</p> <h2>However</h2> <p>Looking at what Tailwind has grown into by version 3.2, I have some concerns about using Tailwind in <i>big projects</i>.</p> <h2>Tailwind takes up 70% of my markup...<br> ...even after I'm done with the design</h2> <p>While Tailwind makes writing CSS fast and enjoyable, the biggest problem is that <b>once I'm done with CSS - Tailwind is still in my face</b>. Staring at me even after I'm done with the design. Making everything that's <i>not CSS</i> - content, JS, markup etc - much harder to work with.
It takes valuable space, and going back to a file that looks like this causes me physical pain:</p> <pre>&lt;div class=&quot;min-h-full bg-white px-4 py-16 sm:px-6 sm:py-24 md:grid md:place-items-center lg:px-8&quot;&gt; &lt;div class=&quot;mx-auto max-w-max&quot;&gt; &lt;main class=&quot;sm:flex&quot;&gt; &lt;p class=&quot;text-4xl font-bold tracking-tight text-indigo-600 sm:text-5xl&quot;&gt;400&lt;/p&gt; &lt;div class=&quot;sm:ml-6&quot;&gt; &lt;div class=&quot;sm:border-l sm:border-gray-200 sm:pl-6&quot;&gt; &lt;h1 class=&quot;text-4xl font-bold tracking-tight text-gray-900 sm:text-5xl&quot;&gt;Error&lt;/h1&gt; &lt;/div&gt; &lt;div class=&quot;mt-10 flex space-x-3 sm:border-l sm:border-transparent sm:pl-6&quot;&gt; &lt;a href=&quot;#&quot; class=&quot;inline-flex items-center rounded-md border border-transparent bg-indigo-600 px-4 py-2 text-sm font-medium text-white shadow-sm hover:bg-indigo-700 focus:outline-none focus:ring-2 focus:ring-indigo-500 focus:ring-offset-2&quot;&gt;Go back home&lt;/a&gt; &lt;a href=&quot;#&quot; class=&quot;inline-flex items-center rounded-md border border-transparent bg-indigo-100 px-4 py-2 text-sm font-medium text-indigo-700 hover:bg-indigo-200 focus:outline-none focus:ring-2 focus:ring-indigo-500 focus:ring-offset-2&quot;&gt;Contact support&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;/main&gt; &lt;/div&gt; &lt;/div&gt;</pre> <p>I just wanted to change that "Error" header text, now where is that exactly? Do you see it?</p> <p>Look, Tailwind, I like you, we've had a great time together, but I'm done with the design, I'm moving on, stop staring at me and get out of my face.</p> <p>HTML pollution is not a huge problem when coding landing pages and websites like this one you're reading right now. Because marketing websites are 99% design and only 1% code (and that 1% code all sits in the "contact us" form). But in a complex app with hundreds of thousands of LOC of business logic...</p> <h2>Maintainability &amp; discoverability</h2> <p>Let's say you already have a huge app and you need to fix a minor UI bug. Some button looks odd or something.</p> <p>Since Tailwind is an abstraction over CSS, <strong>it adds extra steps to reverse engineering</strong>. Which is what debugging essentially is - reverse engineering your own code. Working backwards from an unexpected result to the reason behind it.</p> <p>Say, you haven't touched your CSS in months. You can't just "remember" what might be wrong with that button. So you fire up your dev-tools in the browser, right-click the offender, "inspect element" and notice some strange styling that comes from... the compiled "output.css". But where the heck did it come from exactly? Is it a TW utility class we used on the element via <code>@apply</code>? Is it the "forms" plugin? Is it the "prose" plugin? Did someone override the default "theme" in "tailwind.config.js"? Or maybe it's the default "preflight" base styling that TW applies? And where is that configured exactly, the "tailwind.config.js"? NOPE, turns out our new developer has added a <code>@layer</code> override in "input.css". OK, I think I found the reason, let's try the fix now! Opening my HTML, locating the element in a <i>wall</i> of utility classes, applying the fix, done.</p> <p>Refreshing the browser. Wait, nothing changed. Was it the wrong fix? Oh, wait, maybe it's the npm-based "JIT" watcher that's not working? Let me see... Yep, it's down. The npm process has either crashed <i>again</i> with the "JavaScript heap out of memory" error.
Or maybe the npm-script does not work on WSL2 because I'm currently on my Windows laptop. Or maybe I'm using a code-editor that we haven't configured the build system for (we have it for VsCode and VS-2022, but not for Sublime). Anyway, let's just launch it manually (googling-googling-found it) <code>npx blah-bleh --WATCH</code> thanks StackOverflow. I really hope the fix works, because I don't want to start over.</p> <p>By the way, did I mention that Tailwind CSS is <i>the only freaking reason</i> we have <code>npm</code> in this project? Yeah, not everyone uses JavaScript. People also use C#, Java, Rails, Go, Python and even (wait for it...) PHP. We're a dotnet shop, we already use nuget, libman, web-compiler, hell, why not throw npm into the mix. With "package.json", build scripts, a 13MB "node-modules" directory and all...</p> <h2>Integrability</h2> <p>Let's say you'd like to try to integrate Tailwind CSS into a big existing project.</p> <ol> <li>The project is <i>big</i>. It has half a million lines of code in 4087 files. Around 500 of those are front-end markup files - HTML views and partials (ouch!)</li> <li>The project already has a theme + design system, also brand-colors, icons (ouch #2)</li> <li>...and a default font, visual hierarchy etc etc (ouch #3)</li> </ol> <p>You basically have two options:</p> <ol> <li>Rewrite everything from scratch (of course)</li> <li>Gradual "incremental" rewrite, starting small and letting Tailwind take over eventually</li> </ol> <p>Let's try option 2. Say I'd like to redesign that ugly looking button. After all Tailwind is just a bunch of utility classes, should be no problem isolating it from the rest of the app, right?</p> <p>Installing Tailwind... done. The project looks awful, obviously, but that's fine. Tailwind has added its default font and a CSS normalizer. Let's override the default font back. Let's also make our existing CSS "Tailwind-friendly" so it does not conflict with all the default <code>box-sizing</code>, <code>height:auto</code>, margins, colors, let's add back the bold-font to all the H1-H6, remove default outlines etc.</p> <p>If that sounds like too much work we can always just disable TW "preflight" completely, but this way the utility classes might look weird.</p> <p>OK, we've added Tailwind, set up the tooling for all the devs (some use VSCode, some use Webstorm, or Rider, or the "big" Visual Studio), and after only 2 days our app looks more or less the same as before.</p> <p>Our project is not heavily "componentized", if we have a button in our code - then it is... just a <code>&lt;button&gt;</code>, not a partial or a component. Sometimes we might add <code>&lt;button class="inactive"&gt;</code> but that's about it. I just ran a search, and we have 378 <code>&lt;buttons&gt;</code> in our code. If I want them all to have rounded corners, I would either have to copy-paste <code>rounded-md</code> 378 times, or convert all my buttons to a component/partial, which sounds like an overkill.</p> <p>Let's go with the <code>@apply</code> directive then. That's exactly what it's for, according to the Tailwind docs "creating a partial for something as small as a button can feel like overkill". Exactly! I'll just rewrite all my buttons using <code>@apply</code>. And now we have basically ended up with a CSS file (again) that uses TW-utilities instead of the "regular" CSS. We now have <code>button { @apply px-4 py-2 rounded-md }</code> instead of <code>button { padding: x y; border-radius: z; }</code>. Good. 
I don't get how that is better than regular CSS, tho. We still have an external CSS file, we still describe all our elements in it, just with a different syntax now. What have we gained? Aside from Tailwind's constraints (in a good way) and the ability to use the Tailwind UI kit that we paid for.</p> <h2>"You just don't get it, boomer"</h2> <p>Yeah, I must be old. So I went out and searched Hacker News for all the TW-related comments and then went on and listened to 5 podcast episodes. These are the benefits developers mention all the time:</p> <p><strong>1. Developers (apparently) hate having a separate CSS file</strong> They really do, as it turns out. This is a surprise to me. But, well, OK, I'll take that as a valid reason. Kinda ruined if you use @apply tho.</p> <p><strong>2. Developers miss the native-app dev experience</strong> A surprising number of people have literally mentioned this, including Adam Wathan himself. Coming from the native development world (writing desktop or mobile apps for Mac, PC, iOS etc) to the web - they miss the ability to simply click a UI element in the IDE (like a textbox) and then edit its properties in the side bar (bold font, black background etc). <i>Without</i> having to come up with an identifier or a class for the element. <i>Without</i> going to a separate css-file and targeting the identifier's selector. <i>Without</i> writing actual CSS. <i>Without</i> keeping cross-browser nuances in mind, along with the cascading nature and the "don't repeat yourself" rule.</p> <p><strong>3. Devs hate naming stuff</strong> and coming up with class names takes a significant part of their day. Everyone mentions that. A lot.</p> <p>All valid points, I guess. However, <b>we don't write CSS every day</b>. I just looked at our LESS file (yeah, we still use LESS) and pulled up the git history for it. In the last 10 years (yeah, we're <i>that</i> old) it's been modified once every two months on average. See, our product is a long-lasting, profitable, <i>boring</i> app. We get back to CSS only when something breaks or something needs a facelift and that's it. We're OK with having it in a separate file, and coming up with a class name once every two months is not a huge problem.</p> <h2>Tailwind is an ORM</h2> <p>While analyzing Tailwind's pros and cons I realized it kinda reminds me of another "tool" - an ORM. ORM libraries help with database access, so you don't have to "write SQL". They're great to jump-start an app, build a landing page, a prototype, an MVP, to test a hypothesis, hey, even power up a working product with paying customers for a couple of years. But once your app gets mature and scales up to millions of daily active users - the ORM quickly becomes a pain for an experienced DBA. You know, the type of DBA who sees a way to optimize a database query just by glancing at its execution plan XML. You're in deep waters now, you're in the land of database sharding and distributed caches. And now the ORM just gets in the way.
Our DBA spends more time fighting it, instead of throwing away the abstraction layer and making things simple again.</p> <p><small>(or wait until someone releases a lightweight, non-opinionated "micro" ORM, like "Dapper" from Marc, Nick and Sam - the guys who wrote that website we all use every day called StackOverflow)</small></p> <hr> <p>Coming up in the "grumpy old dev" series: how to build a static website without React + Next.js + build-step + Docker.</p> Tue, 06 Dec 2022 22:11:48 GMT https://www.jitbit.com/alexblog/fast-memory-cache/ https://www.jitbit.com/alexblog/fast-memory-cache/ How I learned to stop worrying and wrote my own memory-cache <p>Here I am, writing about performance optimization <a href="https://www.jitbit.com/alexblog/309-improving-c-performance-with-spant/">again</a>. I'm a performance junkie. Constantly monitoring and investigating bottlenecks for our <a href="https://www.jitbit.com/saas-helpdesk/">SaaS helpdesk</a> webapp is my favorite thing ever. And I'm proud to say that with thousands of clients, even some really big ones, our app's backend process rarely goes higher than 5-6% CPU. Even during the peak load times which happen around 2-3pm UTC - the time when "Americas are waking up <i>while</i> Europe is still very active".</p> <!--more--> <blockquote>See, the biggest reason we obsess over performance is <i>deploys</i>. To spare the boring details, when one updates an ASP.NET Core app, the old process is shut down, and HTTP requests are being queued up until the new process is fully launched. This can result in tens of thousands of requests rushing in once the new process is up. And for 10-20 seconds the app becomes slow. Really slow. Now <i>that</i> is the biggest reason we pay so much attention to performance (specifically, startup performance) - so that our customers do not notice any hiccups when we deploy a new version.</blockquote> <p>And recently I found a perfect candidate to optimize.</p> <h2>.NET's "MemoryCache"</h2> <p><b>TL;DR</b> "MemoryCache" sucks</p> <p>We cache a lot of stuff in memory, to spare a database call whenever we can. We use Redis as an "L2" cache provider, but for "L1" we use in-memory caching primitives offered by .NET. Basically Microsoft offers two caches: <code>System.Runtime.Caching.MemoryCache</code> and <code>Microsoft.Extensions.Caching.MemoryCache</code>.</p> <p>They both suck and ruin performance.</p> <ul> <li><code>MemoryCache</code> is slow</li> <li>It comes with bloat (for example, performance counters that you can't disable)</li> <li>It uses string keys only (so you have to allocate a string every time you address an object)</li> <li>Not generic (causes boxing/unboxing for primitive value-types like <code>int</code> or <code>DateTime</code>)</li> <li>Uses black magic and non-obvious heuristics to evict items under "memory pressure"</li> <li>Non-atomic writes and reads that lead to a <a href="https://en.wikipedia.org/wiki/Cache_stampede" rel="nofollow">cache stampede</a></li> <li>Did I mention <i>it is slow</i>?</li> </ul> <h2>Enter FastCache</h2> <p>What I needed is basically a "concurrent dictionary" with expiring items that will be fast, lightweight, atomic and generic.</p> <p>So this weekend I did what every programmer would do, I wrote my own cache <s>with blackjack and hookers</s>. Please welcome: <a href="https://github.com/jitbit/FastCache" rel="nofollow">FastCache</a>. No, honestly, I did google for alternatives, but haven't found anything that really fits my needs.
After all, in-memory cache efficiency is not something you worry about at the beginning of your software company's journey (which is where most companies are). I guess that explains why there are so few offerings in this space.</p> <h2>Making a fast cache</h2> <p>My two biggest challenges were:</p> <ol> <li>How to work with date/time efficiently when evicting expiring items?</li> <li>How to make writes atomic?</li> </ol> <h3 id="perf">"DateTime.Now" is slow</h3> <p>Check this benchmark out. It does nothing but get the "current system time":</p> <pre> | Method | Mean | Error | StdDev | |---------------- |---------:|---------:|---------:| | DateTime_Now | 93.55 ns | 1.516 ns | 0.083 ns | | DateTime_UtcNow | 19.19 ns | 1.479 ns | 0.081 ns | </pre> <p><code>DateTime.UtcNow</code> is quite a bit faster because it does not have to account for time zones (another good reason to use UtcNow in your code!). But still not fast enough. If I'm going to check whether a cached item is expired I need something much faster. Check this out:</p> <pre> | Method | Mean | Error | StdDev | |---------------- |----------:|----------:|----------:| | DateTime_Now | 92.530 ns | 3.2421 ns | 0.1777 ns | | DateTime_UtcNow | 19.037 ns | 0.6913 ns | 0.0379 ns | | GetTickCount | 1.751 ns | 0.0245 ns | 0.0013 ns | </pre> <p>Yes. How about that. Under 2 nanoseconds, baby. This is <code>Environment.TickCount</code>. It is limited to the <code>int</code> data type though, which caps out at about 2.1 billion milliseconds (roughly 25 days). But hey, if I target .NET 6 there's <code>TickCount64</code>, which is equally fast on 64-bit processors!</p> <h3>"Atomicness"</h3> <p>The second biggest challenge is "atomicness". When it comes to atomicness, the biggest trick is to check the "item exists" and "item not expired" conditions <i>in one go</i>.</p> <p>When an item is "found but expired" - we need to treat it as "not found" and discard the item. For that we either need to use a <code>lock</code> so that the three steps "exist? expired? remove!" are performed atomically. Otherwise another thread might jump in and ADD a non-expired item with the same key while we're still evicting the old one. And we'll be removing a non-expired key that was just added.</p> <p>Or - instead of using locks we can remove by key AND by value. So if another thread has just rushed in and shoved in another item with the same key - that other item won't be removed.</p> <p>Basically, instead of doing this</p> <pre> lock { exists? expired? remove by key! } </pre> <p>We now do this</p> <pre> exists? (if yes the backing dictionary returns the value atomically) expired? remove by key AND value </pre> <p><a href="https://github.com/jitbit/FastCache/blob/main/FastCache/FastCache.cs#L74" rel="nofollow">Here's how we do it.</a> If another thread chipped in while we were in the middle of checking whether it's expired, and recorded a new value - we won't remove it.</p> <h3>Why is <code>lock</code> bad?</h3> <p>Locks suck because they add an extra 50ns to the benchmark, so it becomes 110ns instead of 70ns, which sucks.
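<p>For illustration, here's a rough sketch of that lock-free approach - <b>not</b> the actual FastCache source, just the idea: a generic ConcurrentDictionary of (value, expiration) pairs, <code>Environment.TickCount64</code> for the expiration math, and a "remove by key AND value" on eviction (the <code>TryRemove(KeyValuePair)</code> overload, available in .NET 5 and later, removes the entry only if both key and value still match):</p> <pre>
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

//a minimal sketch, NOT the actual FastCache implementation
public class TtlCacheSketch&lt;TKey, TValue&gt; where TKey : notnull
{
    private readonly ConcurrentDictionary&lt;TKey, TtlValue&gt; _dict = new();

    //value + expiration timestamp, stored together as one dictionary entry
    private readonly record struct TtlValue(TValue Value, long ExpiresAt)
    {
        public bool IsExpired() =&gt; Environment.TickCount64 &gt; ExpiresAt;
    }

    public void AddOrUpdate(TKey key, TValue value, TimeSpan ttl) =&gt;
        _dict[key] = new TtlValue(value, Environment.TickCount64 + (long)ttl.TotalMilliseconds);

    public bool TryGet(TKey key, out TValue value)
    {
        value = default!;
        if (!_dict.TryGetValue(key, out var found)) return false; //not found

        if (found.IsExpired())
        {
            //remove by key AND value: if another thread has just written a fresh
            //value under the same key, this call leaves it alone - no lock needed
            _dict.TryRemove(new KeyValuePair&lt;TKey, TtlValue&gt;(key, found));
            return false; //"found but expired" counts as "not found"
        }

        value = found.Value;
        return true;
    }
}
</pre>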
So - no locks then!</p> <h2>Overall speed test</h2> <p>Benchmarks under Windows</p> <pre style="white-space:pre"> | Method | Mean | Error | StdDev | Gen0 | Allocated | |--------------------- |----------:|----------:|---------:|-------:|----------:| | FastCacheLookup | 67.15 ns | 2.582 ns | 0.142 ns | - | - | | MemoryCacheLookup | 426.60 ns | 60.162 ns | 3.298 ns | 0.0200 | 128 B | | FastCacheAddRemove | 99.97 ns | 12.040 ns | 0.660 ns | 0.0254 | 160 B | | MemoryCacheAddRemove | 710.70 ns | 32.415 ns | 1.777 ns | 0.0515 | 328 B | </pre> <p>Benchmarks under Linux (Ubuntu, Docker)</p> <pre style="white-space:pre"> | Method | Mean | Error | StdDev | Gen0 | Allocated | |--------------------- |------------:|-----------:|----------:|-------:|----------:| | FastCacheLookup | 94.97 ns | 3.250 ns | 0.178 ns | - | - | | MemoryCacheLookup | 1,051.69 ns | 64.904 ns | 3.558 ns | 0.0191 | 128 B | | FastCacheAddRemove | 148.32 ns | 25.766 ns | 1.412 ns | 0.0253 | 160 B | | MemoryCacheAddRemove | 1,120.75 ns | 767.666 ns | 42.078 ns | 0.0515 | 328 B | </pre> <p>Wow, a 10x performance boost on Linux. I love my job.</p> <p><small>P.S. The title of this post is a hat tip to <a href="https://samsaffron.com/archive/2011/03/30/How+I+learned+to+stop+worrying+and+write+my+own+ORM" rel="nofollow">Sam Saffron</a>.</small></p> Thu, 22 Sep 2022 10:24:13 GMT https://www.jitbit.com/alexblog/310-how-to-hide-tethering-from-your-mobile-operator/ https://www.jitbit.com/alexblog/310-how-to-hide-tethering-from-your-mobile-operator/ How to Hide Tethering from Your Mobile Operator <h3>TLDR:</h3> <ol> <li>Use a secure VPN to prevent DPI</li> <li>On your laptop, change packet TTL to 65 (iOS default 64 plus one).</li> </ol> <!--more--> <p style="text-align:center">&bull;&bull;&bull;</p> <p>On my recent mountain biking trip to France I accidentally booked an Airbnb without WiFi. Bummer. But hey, 5 minutes of googling and I found a perfect eSim provider that offers <i>unlimited</i> data for only €19/week. Who needs slow DSL-based WiFi in the apartment, when you can have 4G <i>everywhere</i>?</p> <p>After placing an order and scanning the QR-code landed in my inbox I was up and running in 30 seconds (gosh I love eSim). SpeedTest showed a strong 65 Mbit/s connection. Perfect. The only problem was - the “personal hotspot” mode didn’t work at all. As it turns out (after reading the small grey text at the bottom of their landing page) the operator does not support data sharing on unlimited plans.</p> <p>Challenge accepted. Let the hacking begin.</p> <blockquote> <h2>How do mobile carriers detect "personal hotspot"?</h2> <p>In short: by deep packet inspection and TCP/IP stack fingerprinting. And sometimes, your iPhone rats on you too.</p> <p>DPI means looking "inside" a network packet and analysing its content. For example, looking at your browser’s “user-agent” header for non-SSL connections. Or examine the traffic destination - if your “iPhone” suddenly sends requests to "Windows Update" servers, well, then it’s probably not exactly an iPhone, huh?</p> </blockquote> <p>All these traces can be hidden by using a secure VPN to encrypt the traffic. I installed the free "1.1.1.1" app from Cloudflare, that hides my packets' content, my DNS requests and the destination IPs. All my traffic now goes to a single VPN server in Oslo, Norway.</p> <p>That didn't work. The laptop still had no internet connectivity, while the phone worked perfectly.</p> <p>Fine. Let’s check the APN settings assigned by the network. 
Sometimes it instructs the phone to use a different APN address for tethering. A quick check in "Mobile data - Data plan - Mobile data network" - nope, the APN for "personal hotspot" and "mobile data" were the same.</p> <p>What should I try next? After 5 minutes of staring at the ceiling I suddenly remembered how ages ago, during my network admin internship days, my supervisor once taught me this trick: "Hey, did you know you can tell which OS a machine is running by sending a simple PING?" If the response says "TTL 128" - it's Windows, if it says "TTL 64" - it's Linux.</p> <p>A-ha! That's probably how the ISP can see that I'm on Windows.</p> <p>Seems logical: if the mobile operator can't look "inside" the VPN-encrypted packet, their last resort would be the packet's "meta" data. Which can reveal the operating system's default TCP/IP settings. And "TTL" (time-to-live) is one of the strongest hints of all. So I edited my laptop's registry settings, set TTL to 64 - to mimic iOS - sat back and prepared to enjoy my high speed Internet.</p> <p>Nope. Darn. It must be something else.</p> <p>OK, but what is TTL - "time-to-live" - exactly? It means how many "hops" a packet can "survive" before being dropped. And every time a packet passes through a router - its TTL is decreased by one. And my iPhone is exactly that - an extra "router" on the packet's journey. Meaning, once my packets pass through the iPhone hotspot their TTL becomes 63. Bingo. That's how the provider can tell. My TTL is an odd number.</p> <p>Setting it to 65 will make it 64 once the packet passes the iPhone. So now my packets are indistinguishable from the iOS "native" traffic.</p> <p>Aaaa-nd boom! It worked. I'm proudly writing this post using my laptop. Enjoying some Netflix in the background.</p> <p>Being an IT engineer is like having a cheat code to life.</p> <p><small>P.S. Sometimes your iPhone refuses to turn on personal hotspot in the settings, saying it's operator-disabled, but you can still force-enable it from the control-center.</small></p> Thu, 14 Jul 2022 14:29:30 GMT https://www.jitbit.com/alexblog/309-improving-c-performance-with-spant/ https://www.jitbit.com/alexblog/309-improving-c-performance-with-spant/ Improving C# Performance with Span<T> <p> Whenever I have some free time on my hands I love making our <a href="https://www.jitbit.com/helpdesk/">helpdesk app</a> faster. The newest C# and .NET Core releases come with so many performance oriented features that I've been waiting to play with, specifically the new datatype called <code>Span&lt;T&gt;</code>. </p> <!--more--> <p>Here's the thing. Every program spends 80% of its CPU cycles working with Strings and Byte Arrays. Guess what, even sending an email over SMTP or parsing an incoming HTTP request - is still working with strings and arrays. But the problem is that strings are "immutable" and arrays are fixed-size. If you want to slice, trim, expand, combine or otherwise manipulate an array or a string, you always allocate new copies. Every tiny modification - creates a new copy. Which is a huge performance problem - more work for the Garbage Collector, more memory usage etc. etc. Say you want to <code>Split()</code> a string by a delimiter - well, you've just allocated N more strings in addition to the original one.</p> <p>This is why the .NET team has come up with the <code>Span&lt;T&gt;</code> datatype.
<b>It's basically a "view" into your existing array</b>.</p> <p><img src="https://i.imgur.com/pEsLBUT.png?1"></p> <p>You can manipulate your "array-like" data using spans all you want - trim, slice, split and combine. It all happens on an existing memory range. And once you're done - convert it back to an array (or don't, if your further code is also Span-compatible).</p> <h2>Real world Span optimization example</h2> <p>Our helpdesk app has a built-in "data URL" parser. "Data URLs" are inline HTML images that look like this:</p> <pre>&lt;img src="data:image/png;base64,iVBORw0KGgoAAA ANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4 //8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU 5ErkJggg==" alt="Red dot" /&gt;</pre> <p>Parsing this image boils down to finding a comma, then base64-decoding everything after it into a byte array. Even this simple 2-step operation leaves huge room for performance improvements.</p> <p>Originally we were using this code:</p> <pre> public static byte[] ParseDataUrlArraySplit(string imgStr) { return Convert.FromBase64String(imgStr.Split(',')[1]); } </pre> <p>Which obviously sucks as it allocates two new strings, and then parses the second one. Let's rewrite the <code>.Split</code> call into <code>.Substring</code> like this:</p> <pre> public static byte[] ParseDataUrlSubstr(string imgStr) { return Convert.FromBase64String(imgStr.Substring(imgStr.IndexOf(',') + 1)); } </pre> <p>This will probably perform better, but can we use Spans instead?</p> <pre> public static byte[] ParseDataUrlSpan(string imgStr) { var b64span = imgStr .AsSpan() //convert string into a "span" .Slice(imgStr.IndexOf(',') + 1); //slice the span at the comma //prepare resulting buffer that receives the data //in base64 every char encodes 6 bits, so 4 chars = 3 bytes var buffer = new Span&lt;byte&gt;(new byte[((b64span.Length * 3) + 3) / 4]); //call TryFromBase64Chars which accepts Span as input if (Convert.TryFromBase64Chars(b64span, buffer, out int bytesWritten)) return buffer.Slice(0, bytesWritten).ToArray(); else return null; } </pre> <p>The code above slices a string (which can be cast to a <code>ReadOnlySpan&lt;char&gt;</code> since strings are essentially arrays of chars) then allocates a buffer, and uses the new <code>TryFromBase64Chars</code> API that accepts Span as a parameter.
Yes, we're still calling a <code>ToArray</code> at the end just because I'm lazy and don't want to rewrite further code that still expects a byte array.</p> <p>The results are mind blowing:</p> <pre> BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19044.1586 (21H2) Intel Core i7-9700K CPU 3.60GHz (Coffee Lake), 1 CPU, 8 logical and 8 physical cores .NET SDK=6.0.201 [Host] : .NET 6.0.3 (6.0.322.12309), X64 RyuJIT ShortRun : .NET 6.0.3 (6.0.322.12309), X64 RyuJIT Job=ShortRun IterationCount=3 LaunchCount=1 WarmupCount=3 | Method | Mean | Error | StdDev | Gen 0 | Gen 1 | Allocated | |------------------ |---------:|----------:|----------:|-------:|-------:|----------:| | TestUrlArraySplit | 6.153 us | 0.4384 us | 0.0240 us | 1.8158 | 0.0687 | 11 KB | <span style="color:#0b0">| TestUrlSpan | 2.730 us | 0.2081 us | 0.0114 us | 0.9842 | 0.0191 | 6 KB |</span> | TestUrlSubstr | 5.717 us | 0.3150 us | 0.0173 us | 1.7929 | 0.0153 | 11 KB | </pre> <p>The span-variant is more than 2X faster than our original Split-based code and allocates almost half the memory!</p> <p>Considering we use this code <i>a lot</i> in our app (whenever a helpdesk end-user pastes an image into a support ticket or a reply), this optimization will add up into a huge performance boost.</p> Wed, 23 Mar 2022 23:20:08 GMT https://www.jitbit.com/alexblog/308-sql-is-the-most-long-lasting-skill-in-tech/ https://www.jitbit.com/alexblog/308-sql-is-the-most-long-lasting-skill-in-tech/ SQL is the most long lasting skill in tech <p>In January 2020, right before COVID hit, a <a href="https://news.ycombinator.com/item?id=21961214" rel="nofollow">question</a> popped up on the HackerNews front page:</p> <blockquote>"Which technology is worth learning in 2020?"</blockquote><!--more--> <p>And the most "upvoted" answer was:</p> <blockquote><b>Learn how to really use a relational database, relational data modeling, and SQL</b></blockquote> <p>Well, finally, someone said it. Thank you.</p> <p>Hold on, I get it. There's so much "fancy" stuff out there: K8S, Redis clusters, <a href="https://www.youtube.com/watch?v=b2F-DItXtZs" rel="nofollow">"web scale"</a> NoSQL engines and other rock-star agile next-gen hotness. But relational databases (first described way back in the 1970 scientific paper by Edgar Codd) are here to stay. Moreover, I think SQL will even see some sort of comeback in the coming years.</p> <p>I have to confess, I love boring "unsexy" technology in general. You can make it do magical things. For example, at the time of this writing our SaaS app sends almost 30,000 (30 thousand) emails every hour. This is 8-10 emails per second. And we don't use cloud platforms like Mailgun or SES or - what else is there, Sendgrid? - nope. We use a boring Postfix service running on a Ubuntu server that costs us a penny - I blogged about this <a href="https://www.jitbit.com/news/email-architecture/">here</a>.</p> <p>Back to SQL: a couple of years ago we exceeded a thousand customers just for the SaaS version of our help desk product. This means that a thousand companies are flocking into our software - all at the same time. Every company has dozens, sometimes hundreds of "help desk agents" and thousands of end users. This translates into hundreds of thousands of "DAU" - daily active users.</p> <p>There are 50 million tickets in the database and each ticket has a thread of 10-20 messages attached to it, resulting in half a billion entries just in the "messages" table alone.
This scale makes me lose sleep, but all this data (including the full-text search) lives in a single SQL database, which is spinning on an average quad-core machine that is way less powerful than my gaming laptop.</p> <p>Thank you SQL. With index tuning, proper partitioning, some "execution plan" analysis and caching - you help us build wonderful things reliably.</p> <p><b>"SQL seems to be the most long-lasting skill in the IT industry"</b></p> Sat, 15 Jan 2022 20:30:10 GMT https://www.jitbit.com/alexblog/307-cross-post-migrating-a-1tb-database-from-win-to-linux-with-no-downtime/ https://www.jitbit.com/alexblog/307-cross-post-migrating-a-1tb-database-from-win-to-linux-with-no-downtime/ Cross-Post: Migrating a 1TB database from Win to Linux with no downtime <p>For those of you who don't follow our company blog, we've just published another "tech porn" story on migrating a huge database from Windows to Linux with no downtime. <a href="https://www.jitbit.com/news/5366-how-we-migrated-a-1tb-database-from-win-to-linux-with-no-downtime/">Check it out</a>.</p><!--more--> <p>TL;DR: we spawned a Linux version of SQL Server and moved the database using "log-shipping". We've been planning this move for months and this Christmas weekend we finally did it. <a href="https://www.jitbit.com/news/5366-how-we-migrated-a-1tb-database-from-win-to-linux-with-no-downtime/">Read on</a></p> Fri, 31 Dec 2021 12:07:49 GMT https://www.jitbit.com/alexblog/302-speeding-up-a-huge-multi-tenant-saas-database/ https://www.jitbit.com/alexblog/302-speeding-up-a-huge-multi-tenant-saas-database/ Speeding up a huge multi-tenant SaaS database <p>Here's a little story of how we sped up our SaaS backend with a one-liner magic silver bullet</p><!--more--> <h2>The problem</h2> <p>Our <a href="https://www.jitbit.com/saas-helpdesk/">SaaS</a> is powered by a huge multi-terabyte "multi tenant" relational database cluster. Some tables are more than 200 GB - this is crazy, to be honest. And for the multi-tenant architecture we use the dumbest possible model - "Pool" - this is when a <code>"tenant_id"</code> column is added to all database tables. The model is not the most efficient, but it's super easy to implement, back up and maintain.</p> <blockquote>by the way, AWS has published a very cool <a href="https://d1.awsstatic.com/whitepapers/Multi_Tenant_SaaS_Storage_Strategies.pdf" rel="nofollow">doc</a> about designing multi-tenant SaaS systems, that goes through all the options, a must-read for every CTO</blockquote> <p>Everything was slow and sluggish. Clients were furious, servers were overheating. Things like "get a record by ID" worked just fine, but getting any <i>list</i>, like "unread messages for today", in a multi-terabyte database becomes a nightmare. Even with all the correct indexes. What's even worse, a fat client with a bazillion records slows down smaller customers with less data. Something needed to be done.</p> <h2>The Great Revelation</h2> <p>...And that's when we had the Great Revelation [sarcasm], which sooner or later comes to any DBA - most of the work uses the "tip" of the data. And the huge "long tail" archive just lies around <s>as useless dead weight</s> and is used for reports only.</p> <p>Our first thought was to set up "vertical" partitioning. Push the "old" data somewhere beyond the horizon (onto a separate disk or even another server), but keep the "recent" data somewhere close.</p> <p>Yes, but not really.</p> <p>Partitioning is hard.
It turned out difficult to set up, a PITA to maintain, and it never works the first time. To paraphrase the well-known yachting proverb: the two happiest days in a DBA's life are the day he sets up partitioning and the day he gets rid of it. Because the server still performs cross-partition scans, and those cases are pretty hard to investigate.</p> <p>I can hear the audience shouting: "sharding!", "ClickHouse!", "separate OLTP from DWH!"... And similar overengineering stuff. Sorry, but no. We have a self-hosted version that should install itself with one click, even for non-tech-savvy customers. The fewer moving parts the better.</p> <p>I wanted a simple hack that would solve all problems.</p> <h2>Meet Filtered Indexes</h2> <p>That's when I accidentally remembered the awesome cheat code - "filtered indexes".</p> <p>Here's the thing: a database index is always built over the entire table by default. But what if we could index just 0.1% of the data?</p> <p>See, inside any CRUD application code - in the business logic - there is always a condition that distinguishes "old" data from the "new". Something like "project status = finished". Or "order status = processed", etc. And this condition is already present in most of your <code>SELECT</code>s. In our case, it was "ticket status = closed".</p> <p>What does a junior DBA engineer do? They create an index over this column. So that the search for "unclosed tickets" or "unprocessed messages" is fast and cool.</p> <pre>CREATE INDEX myIndex ON messages (processed)</pre> <p>What does a senior DBA do? Creates a "filtered index" with this condition:</p> <pre>CREATE INDEX myIndex ON messages (column1, column2 ...) WHERE processed = 0 --like this</pre> <p>And then makes sure that this condition is in all your <code>WHERE</code> queries.</p> <p>As a result, even in a huge multi-terabyte database we now have a small fast index of only tens of megabytes (!), which always points to the most recent data. As soon as the data ceases to satisfy the condition, it just flies away from the index. Magically.</p> <p>When we built our first filtered index and started looking at the usage statistics, our jaws were on the floor - SQL Server dropped what it was doing and started eating the index like crazy. The application accelerated significantly and the CPU load dropped by 80%. Just look at the graph - this is before and after implementing just ONE <i>test</i> index.</p> <p><a href="https://i.imgur.com/mFuzwWT.png" rel="nofollow"><img src="https://i.imgur.com/mFuzwWT.png"></a></p> <p>Our database server is only 4 CPU cores and 32 GB of memory, but it easily pulls off a several-terabyte database with hundreds of thousands of DAU. We now have an unspoken challenge in our team - how long can we keep using this hardware without upgrades? So far we have been stretching it for years &#128521;</p> <p>I guess the point is - before searching for bulky overengineering solutions, look at what the old, boring and unsexy RDBMS have to offer: they can do a lot of cool things even on outdated hardware.</p> <p>P.S. "Filtered" / "partial" indexes are available in SQL Server (2008 and beyond), Postgres (7.2 and beyond), Mongo and even SQLite. MySQL is out, but there's a <a href="https://stackoverflow.com/a/56919750/56621" rel="nofollow">workaround</a>.</p> <p>P.P.S. there is a catch, however. When creating a filtered index, be sure to list the filtered column in the "include" directive. This way we force the database server to maintain "statistics" over the column.
Without the statistics the index will not work - the server will just not use it.</p> <pre>CREATE INDEX myIndex ON Messages (Column1, Column2 ...) INCLUDE (Processed) -- important WHERE Processed = 0</pre> <p><a href="https://www.brentozar.com/archive/2015/12/filtered-indexes-just-add-includes/" rel="nofollow">Brent Ozar</a> has more to say on this</p> Tue, 29 Jun 2021 13:01:36 GMT https://www.jitbit.com/alexblog/301-reasons-not-to-upgrade-aspnet-to-aspnet-core-but-you-will-have-to-anyway/ https://www.jitbit.com/alexblog/301-reasons-not-to-upgrade-aspnet-to-aspnet-core-but-you-will-have-to-anyway/ Reasons NOT to upgrade ASP.NET to ASP.NET Core (but you will have to anyway) <p>I know, I know, <a href="https://devblogs.microsoft.com/dotnet/net-core-is-the-future-of-net/" rel="nofollow">.NET Core is the future of .NET</a>, and "cross-platform blah-blah", and "high-performance and scalable blah-blah", and also "microservices!!! containers!!!" etc. Even more - I understand that it's inevitable. But still. Consider this an angry post on what's wrong with upgrading from ASP.NET Framework to ASP.NET Core:</p><!--more--> <h2>It's not really an upgrade</h2> <p>It's a rewrite. And a tough one. I've lived through many rewrites in my 20-year career in software engineering (Angular -> Vue, Cordova -> Ionic, ASP -> ASP.NET, even PHP -> Python -> node, just to name a few), and this one is the worst, because it gives that false feeling of an "upgrade". Many things will have to be rewritten, and many entirely <i>new</i> things have to be written from scratch because they're just... gone.</p> <h2>No hot-swap update</h2> <p>Minor annoyance, but you can't just overwrite your ASP.NET Core app with new files and expect it to pick 'em up. The files are locked. You have to come up with hacks, like renaming the main dll-file then overwriting, then recycling the app - or pointing your "website" to a new updated directory... And even then you will have minor downtime.</p> <p>"But that's not how you do it" - I know, I know, blue-green deployments, Docker and shit... I don't care. What I do care about are my self-hosted customers who now have a new headache that wasn't there before. Not all of my customers are experienced system administrators.</p> <h2>No recycles</h2> <p>You can't even "recycle" an ASP.NET Core app without losing some requests and throwing 503-errors. There's an active <a href="https://github.com/dotnet/aspnetcore/issues/10117" rel="nofollow">issue</a> at GitHub that has stayed unfixed for 2 years. I don't mean recycling for deployment purposes - just... recycles. I simply want to be able to restart an app without users getting errors.</p> <h2>Not included in Windows Update</h2> <p>The .NET Core runtime is not automatically updated in Windows. This adds more work for system administrators who live in the Windows world - set up WSUS or use "Microsoft Update" (not really the same as "Windows Update", confused? Me too). And we're back to my point about the self-hosted customers - they have more work to do now.</p> <h2>Over-engineered (subjective)</h2> <p>This one is totally subjective, but as a Microsoft-Certified-you-name-it dare I say .NET Core is over-engineered. Too many computer scientists making unnecessary abstractions and wrapping stuff in factories and dependency injections over a simple "Hello World" app.
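<p>For reference, the modern incantation for that particular chore looks something like this - a minimal sketch assuming .NET 6 minimal hosting and a hypothetical "Default" entry in appsettings.json (the names here are made up for illustration):</p> <pre>
// appsettings.json (assumed):
// { "ConnectionStrings": { "Default": "Server=.;Database=Helpdesk;Trusted_Connection=True" } }

var builder = WebApplication.CreateBuilder(args);

// IConfiguration is already wired up by the default builder:
// appsettings.json, environment variables etc. are merged for us
string? connStr = builder.Configuration.GetConnectionString("Default");

var app = builder.Build();
app.MapGet("/", () =&gt; $"Connection string configured: {connStr is not null}");
app.Run();
</pre>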
The <a href="https://stackoverflow.com/questions/39083372/how-to-read-connection-string-in-net-core" rel="nofollow">How to get DB connection string in NET Core"</a> question at StackoverFlow is 15 screens long.</p> <h2>Too frequent changes</h2> <p>Having an issue with .NET Core? Googled up a blog post that describes a solution? Great. Now check the blog post date! If it's from 2-3 years ago - chances are it does not work any more. "Sorry, that's not how it works in version X.0X". This one annoys me the most.</p> <h2>Microsoft doesn't get open source</h2> <p>Microsoft tries its best to suck up to the tech community and be "open-source friendly".</p> <p>But it seems like Microsoft took everything that's <i>bad</i> about open source, and left out everything that's <i>good</i> about open source. By "bad" I mean lack of documentation (just read the sources &trade;), versioning hell, breaking changes etc. And by "good" I mean actually encouraging and accepting contributions.</p> <p>I fixed a bug in SignalR (Microsoft's websocket library) and it took them <i>21 months</i> to <a href="https://github.com/SignalR/SignalR/pull/4387" rel="nofollow">merge the fix</a> and they still haven't released any updates. Yet, it took them only a couple of weeks to make VSCode work on Apple M1 Silicon. That's not "open source friendly", that's PR-driven development.</p> <h2>...but you will still have to</h2> <p>Well, .NET Core is still fast as ef. Even faster than C++ and Go on some tasks. We're in the process of porting our flagship product - <a href="https://www.jitbit.com/helpdesk/">Jitbit Helpdesk</a> - from ASP.NET MVC to ASP.NET Core, so I keep benchmarking stuff all the time, and it's real - same code runs 1.5X faster.</p> <p>And while .NET Framework stays here and even keeps getting updates as part of Windows OS, make no mistake, it's frozen now. I guess we'll have to suck it up and port.</p> Thu, 22 Apr 2021 15:50:07 GMT https://www.jitbit.com/alexblog/300-systemdrawing-vs-skiasharp-benchmark/ https://www.jitbit.com/alexblog/300-systemdrawing-vs-skiasharp-benchmark/ System.Drawing vs SkiaSharp benchmark <p> This is a ridiculously short post for .NET devs looking to compare <code>System.Drawing</code> with <code>SkiaSharp</code> (the fastest open-source alternative). </p><!--more--> <p> I'm currently building a <a href="https://www.jitbit.com/imgen/">new pet project</a> - a free tool, that will allow bloggers and webmasters generate open-graph "cover images" via sending GET requests to a simple API and then hot-linking images right into their pages. That is why I was in need of a fast, simple and scalable image processing library for .NET: </p> <p style="text-align:center;"> <a href="https://i.imgur.com/LWaESLr.jpg" rel="nofollow"><img src="https://i.imgur.com/LWaESLr.jpg"></a> </p> <p> For those unaware, <code>System.Drawing</code> is an image manipulation and generation tool that is part of .NET Framework. But since it depends on Windows so much - it was not included in .NET Core. That is why developers mostly use ImageSharp and SkiaSharp (SkiaSharp is faster but comes with a C++ library, ImageSharp is slower, but it's 100% managed code).</p> <p>Now that <a href="https://www.nuget.org/packages/System.Drawing.Common/" rel="nofollow">System.Drawing.Common</a> is finally available for .NET Core and is even cross-platform, it's time to give it a try. </p> <p> System.Drawing gets a lot of hate. There are dozens of posts why you shouldn't be using it: high CPU load, concurrency issues, slow performance etc. 
I decided to test that last one and benchmark it against the fastest alternative - SkiaSharp. It is based on Google Skia, a portable image manipulation API from the big G. The code generates a 120x80 thumbnail from a 500kb picture. Here are the results: </p> <h3>.NET Framework</h3> <pre> // * Summary * BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19042 Intel Core i7-8650U CPU 1.90GHz (Kaby Lake R), 1 CPU, 8 logical and 4 physical cores [Host] : .NET Framework 4.8 (4.8.4341.0), X86 LegacyJIT Job-YOWEFT : .NET Framework 4.8 (4.8.4341.0), X86 LegacyJIT IterationCount=10 LaunchCount=1 WarmupCount=1 | Method | Mean | Error | StdDev | |----------------------------- |---------:|----------:|----------:| | CreateThumbnailSystemDrawing | 6.733 ms | 0.3119 ms | 0.1856 ms | | CreateThumbnailSkiaSharp | 7.421 ms | 0.1057 ms | 0.0629 ms | </pre> <p>As you can see System.Drawing is faster, at least on Windows. I tried it with different images, played around with different settings to make SkiaSharp faster (with or without anti-aliasing, different interpolation techniques etc.) and the results were consistent: System.Drawing was always better.</p> <p>But may be it's just .NET Framework using some unfair tricks? Let's test on .NET Core</p> <h3>.NET Core 5.0</h3> <pre> // * Summary * BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19042 Intel Core i7-8650U CPU 1.90GHz (Kaby Lake R), 1 CPU, 8 logical and 4 physical cores .NET Core SDK=5.0.201 [Host] : .NET Core 5.0.4 (CoreCLR 5.0.421.11614, CoreFX 5.0.421.11614), X64 RyuJIT Job-IFFNJZ : .NET Core 5.0.4 (CoreCLR 5.0.421.11614, CoreFX 5.0.421.11614), X64 RyuJIT IterationCount=10 LaunchCount=1 WarmupCount=1 | Method | Mean | Error | StdDev | |----------------------------- |---------:|----------:|----------:| | CreateThumbnailSystemDrawing | 5.458 ms | 0.2266 ms | 0.1499 ms | | CreateThumbnailSkiaSharp | 7.325 ms | 0.9527 ms | 0.6302 ms | </pre> <p>Wow, that's even a bigger difference. Almost 1.5x faster than SkiaSharp. The old dog is still kicking.</p> Thu, 15 Apr 2021 10:56:12 GMT
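<p>For context, the thumbnail routines benchmarked above might look roughly like this - a hypothetical sketch, not the author's actual benchmark code (method names, file paths and quality settings are made up):</p> <pre>
using System.Drawing;   // System.Drawing.Common NuGet package
using System.IO;
using SkiaSharp;        // SkiaSharp NuGet package

public static class Thumbnails
{
    // System.Drawing: draw the source image onto a 120x80 canvas and save as JPEG
    public static void WithSystemDrawing(string srcPath, string destPath)
    {
        using var src = new Bitmap(srcPath);
        using var thumb = new Bitmap(120, 80);
        using var g = Graphics.FromImage(thumb);
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.DrawImage(src, 0, 0, 120, 80);
        thumb.Save(destPath, System.Drawing.Imaging.ImageFormat.Jpeg);
    }

    // SkiaSharp: decode, resize, encode
    public static void WithSkiaSharp(string srcPath, string destPath)
    {
        using var src = SKBitmap.Decode(srcPath);
        using var resized = src.Resize(new SKImageInfo(120, 80), SKFilterQuality.Medium);
        using var image = SKImage.FromBitmap(resized);
        using var data = image.Encode(SKEncodedImageFormat.Jpeg, 90);
        using var stream = File.OpenWrite(destPath);
        data.SaveTo(stream);
    }
}
</pre>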