#clang


I like to think of Arch Linux the way I think of the C programming language: it gives you the freedom to royally screw things up if you don't know what you're doing. NixOS is more like Rust: it provides safety that keeps you from easily making stupid mistakes.

#Meme #Linux #ArchLinux

What do OpenBSD people use for static analysis of C code?

Valgrind was available for a while but it seems to have been removed now

Apparently there is something called the "Clang Static Analyzer", but I'm not sure how to actually use it (it seems you have to compile your program with specific Clang flags?)
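
(For anyone who lands here later: a minimal sketch of how the analyzer is usually driven. That scan-build is included in OpenBSD's llvm package is an assumption on my part; clang --analyze is part of clang itself.)

    # analyze one translation unit, passing the same -I/-D flags you normally compile with
    clang --analyze foo.c

    # or wrap a whole build so every compilation unit gets analyzed
    scan-build make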

I know there are countless other bugs in my code, but it's nice to see "no leaks detected" when I do some stupid shit with void pointers :)

#openbsd #c #gcc

Now that I've finished the #Clang backend of the topic (a CPU/GPU-accelerated library, via #openMP, for managing bitsets) for the summer conference community talk (papercall.io/perlcommunity) in 15 days, I find I will probably have no time to complete the #Perl front-end before the deadline.
Lessons learned:

1. Reference counting kicks ass (it's the default way to manage memory mappings for the GPU with #openMP), especially if one can control when the memory is released back to the pool (a small sketch of this behaviour follows below)
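
A minimal sketch of that reference-counting behaviour in plain C with OpenMP target pragmas (the names set_bits/words are made up, and the exact offload flags depend on your toolchain):

    #include <stddef.h>
    #include <stdint.h>

    void set_bits(uint64_t *words, size_t n)
    {
        /* first mapping: device reference count 0 -> 1, data copied to the GPU */
        #pragma omp target enter data map(to: words[0:n])

        /* mapping the same range again only bumps the count (1 -> 2);
           no second copy is made */
        #pragma omp target data map(tofrom: words[0:n])
        {
            #pragma omp target teams distribute parallel for
            for (size_t i = 0; i < n; ++i)
                words[i] |= 1ULL << (i % 64);
        } /* count drops back to 1; the data stays resident on the device */

        /* count 1 -> 0: only now is the device memory released back to the
           pool, and the data copied back to the host */
        #pragma omp target exit data map(from: words[0:n])
    }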


I need help. First the question: On #FreeBSD, with all ports built with #LibreSSL, can I somehow use the #clang #thread #sanitizer on a binary actually using LibreSSL and get sane output?

What I now observe debugging #swad:

- A version built with #OpenSSL (from base) doesn't crash. At least I tried very hard, really stressing it with #jmeter, to no avail. Built with LibreSSL, it does crash.
- Less relevant: the OpenSSL version also performs slightly better, but needs almost twice the RAM
- The thread sanitizer finds nothing to complain about when the binary is built with OpenSSL
- It complains a lot with LibreSSL, but the reports look "fishy"; for example, it seems to intercept some OpenSSL API functions (like SHA384_Final)
- It even complains when running with a single-thread event loop.
- I use a single SSL_CTX per listening socket, creating SSL objects from it per connection ... also with multithreading; according to a few sources, this should be supported and safe.
- I can't imagine that doing this on a *single* thread could break with LibreSSL; that would make SSL_CTX pretty much pointless
- I *could* imagine that sharing the SSL_CTX across multiple threads to create their SSL objects from *might* not be safe with LibreSSL, but I have no idea how to verify that as long as the thread sanitizer gives me "delusional" output 😳 (a possible workaround sketch is below)
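
Not an answer, but one thing worth trying: TSan reads a suppressions file via TSAN_OPTIONS, and "called_from_lib" tells it to ignore its interceptors when they are called from a given shared object. That should at least separate "TSan is confused by LibreSSL's libcrypto" from real races in swad. A sketch (the library name and the hope that this silences the fishy reports are assumptions on my part):

    # tsan.supp
    called_from_lib:libcrypto.so
    race:SHA384_Final

    # run the -fsanitize=thread build with the suppressions active
    TSAN_OPTIONS="suppressions=tsan.supp" ./swad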

C/C++ devs of the fediverse, what does your debugging workflow look like? I've used gdb manually a bit, but it's quite laborious to set up each session. I need to be able to do step-through debugging with variable inspection.
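
One low-tech trick for the setup cost: keep the per-project setup in a gdb command file and load it with -x, so starting a session is one command. Everything below (file names, breakpoints, arguments) is just an illustration:

    # debug.gdb
    file ./myprog
    set args --config test.conf
    break main
    run

    # each debugging session then becomes:
    gdb -x debug.gdb

From there, step/next/print give you step-through debugging with variable inspection, and "tui enable" adds a source view.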

[VS]Code and Visual Studio are very good for step-through debugging once they're set up, but I'd rather avoid them altogether if possible, especially since you have to jump through a series of flaming hoops to get C debugging working in the non-telemetry, open-source build of Code.

Any/all suggestions appreciated, other than 'use rust' #programming #clang #cpp

Part1: #dailyreport #cuda #nvidia #gentoo #llvm #clang
#programming #gcc #c++ #linux #toolchain #pytorch

I am compiling PyTorch with CUDA and cuDNN. PyTorch is mainly a Python library whose core is the Caffe2 C++ library.

The main dependency of Caffe2 with CUDA support is NVIDIA's "cutlass" library (a collection of CUDA C++ template abstractions). Its "CUDA code" may be compiled either with nvcc, the NVIDIA CUDA compiler distributed with nvidia-cuda-toolkit, or with LLVM's Clang++. LLVM, however, supports CUDA only up to version 12.1, though it can still be used to compile CUDA for the sm_52 architecture. Looks like kneeling before NVIDIA. :)

Before installing dev-libs/cutlass you should do:
export CUDAARCHS=75

I successfully compiled cutlass; now I am trying to compile the PyTorch CUDA code with the Clang++ compiler.
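
Since the PyTorch build is CMake-based, pointing it at Clang as the CUDA compiler should come down to the standard CMake variables; I have not verified this against this exact PyTorch/Gentoo combination, so treat it as a guess:

    export CUDAARCHS=75
    export CUDACXX=clang++   # CMake environment variable that selects the CUDA compiler
    # or pass -DCMAKE_CUDA_COMPILER=clang++ to cmake directly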