mastodon.uno is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon.Uno is the main Italian Mastodon community. With 77,000 members it is the largest Italian Mastodon node: an environmentalist spirit that supports privacy and the Open Source world.

Server statistics: 6.7K active users

#cache

1 post · 1 participant · 0 posts today
Continued thread

@cryptomator I added an improvement to store #encryption keys in a CryptProtectMemory protected #cache.

The CryptProtectMemory function encrypts memory to prevent others from viewing sensitive information in your process.
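
(Not the PR's actual code — just a minimal sketch of the Win32 API in question, via Python's ctypes, assuming Windows; the buffer name and padding are illustrative.)

import ctypes

# CryptProtectMemory/CryptUnprotectMemory live in crypt32.dll; the buffer
# length must be a multiple of CRYPTPROTECTMEMORY_BLOCK_SIZE (16 bytes).
CRYPTPROTECTMEMORY_BLOCK_SIZE = 16
CRYPTPROTECTMEMORY_SAME_PROCESS = 0x00   # only this process may decrypt

crypt32 = ctypes.WinDLL("crypt32")

def protect_in_place(buf):
    """Encrypt a ctypes buffer in place so other processes cannot read it."""
    assert len(buf) % CRYPTPROTECTMEMORY_BLOCK_SIZE == 0
    if not crypt32.CryptProtectMemory(buf, len(buf), CRYPTPROTECTMEMORY_SAME_PROCESS):
        raise ctypes.WinError()

def unprotect_in_place(buf):
    """Decrypt the buffer again right before the cached key is needed."""
    if not crypt32.CryptUnprotectMemory(buf, len(buf), CRYPTPROTECTMEMORY_SAME_PROCESS):
        raise ctypes.WinError()

# Hypothetical usage: keep the key encrypted while it sits in the cache.
key = ctypes.create_string_buffer(32)    # padded to a 16-byte multiple
key.raw = b"0123456789abcdef" * 2
protect_in_place(key)
# ... later, when the cached key is actually used ...
unprotect_in_place(key)

The point is simply that the key bytes are not left lying around in plaintext in the process while they sit in the cache.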

The related #PR was submitted.

github.com/cryptomator/integra

I am glad to request this change to finalize the Windows Hello integration.
Based on an idea from @infeo

"to implement caching only in the integrations-win, as natively as possible. General: ...
GitHubAdd a secure cache to Windows Hello to make it usable (amount of prompts) by purejava · Pull Request #105 · cryptomator/integrations-winDi purejava

Ptt's cache.ptt.cc looks like it has been retired?

I only noticed after seeing my earlier article "Rewriting the Imgur Userscript on Ptt to fix images not showing up" come up on Telegram, so I looked for a post with images to test (it had to be one with Imgur): "[公告] 第三屆最婆大會 頒獎儀式". From the HTML you can see that the i.imgur.com parts are now just bare links: i.imgur.com/nRKDMxM.png. Also, from crt.sh | cache.ptt.cc you can see that cache.ptt.cc used to renew its certificate by itself roughly every 60 days; it should have renewed around 2025/01/25 but didn't, so at…
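
To check the renewal history yourself, crt.sh can return its log entries as JSON; a rough sketch (field names follow crt.sh's current JSON output and may change):

import json
import urllib.request

# Ask crt.sh for all logged certificates for the host, as JSON.
url = "https://crt.sh/?q=cache.ptt.cc&output=json"
with urllib.request.urlopen(url, timeout=30) as resp:
    entries = json.load(resp)

# Sorting by not_before makes the ~60-day renewal cadence (and the missing
# renewal expected around 2025/01/25) easy to spot.
for e in sorted(entries, key=lambda e: e["not_before"]):
    print(e["not_before"], "->", e["not_after"], e["common_name"])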

blog.gslin.org/archives/2025/0

#cache #image #photo

🐘 Mastodon Account Archives 🐘

TL;DR Sometimes mastodon account backup archives fail to download via browser, but will do so via fetch with some flags in the terminal. YMMV.

the following are notes from recent efforts to get around browser errors while downloading an account archive link.

yes, surely most will not encounter this issue, and that's fine. there's no need to add a "works fine for me" if this does not apply to your situation, and that's fine too. however, if one does encounter browser errors (there were several unique ones and I don't feel like finding them in the logs), the notes below may help.

moving on... after some experimentation with discarding the majority of the URL's dynamic parameters, I have it working on the cli as follows:

» \fetch -4 -A -a -F -R -r --buffer-size=512384 --no-tlsv1 -v ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256

the primary download URL (everything before the query initiator "?") has been substituted as ${URL_PRE_QMARK}, and then I only included Amazon's algo param; the rest of the URL (especially the "expire" tag) seems to be unnecessary.
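
for anyone who prefers to do the trimming programmatically, a rough equivalent in python (the URL is a placeholder; whether the trimmed link is still accepted depends on the server/CDN, as discussed below):

from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# keep only X-Amz-Algorithm and drop the rest of the pre-signed query
# string (credential, date, expiry, signature, ...).
ARCHIVE_URL = "https://SERVER/MASTO_DIR/backups/dumps/.../original/archive-DATE-HASH.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=..."

parts = urlsplit(ARCHIVE_URL)
kept = [(k, v) for k, v in parse_qsl(parts.query) if k == "X-Amz-Algorithm"]
trimmed = urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))
print(trimmed)   # hand this to fetch/curl/whatever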

IIRC the reasoning is that the CDN defaults to a computationally inexpensive front-line cache management scheme, where the expiry information is embedded in the URL rather than looked up in cache-expiration metadata internal to the CDN clusters.

shorter version: dropping all of the params except the hash algo will initiate a fresh zero-cached hit at the edge, though that has likely been cached on a second/non-edge layer due to my incessant requests after giving up on the browser downloads.

increasing the buffer size and forcing ipv4 help with some firewall rules on my router's side, which may or may not be of benefit to others.

- Archive directory aspect of URL: https://${SERVER}/${MASTO_DIR}/backups/dumps/${TRIPLE_LAYER_SUBDIRS}/original/
- Archive filename: archive-${FILE_DATE}-${SHA384_HASH}.zip

Command:

» \fetch -4 -A -a -F -R -r --buffer-size=512384 --no-tlsv1 -v ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256

Verbose output:

resolving server address: ${SERVER}:443
SSL options: 86004850
Peer verification enabled
Using OpenSSL default CA cert file and path
Verify hostname
TLSv1.3 connection established using TLS_AES_256_GCM_SHA384
Certificate subject: /CN=${SERVER}
Certificate issuer: /C=US/O=Let's Encrypt/CN=E5
requesting ${URL_PRE_QMARK}?X-Amz-Algorithm=AWS4-HMAC-SHA256
remote size / mtime: ${FILE_SIZE} / 1742465117
archive-${FILE_DATE}-${SHA384_HASH}.zip 96 MB 2518 kBps 40s

@stefano looks to be working now :)

Ugh, damn it. I just discovered that the cache plugin is breaking the map display on my blog.

When I'm logged in I can see all the POI markers on the map and the route's elevation profile, and I can download the gpx file, but without logging in there's only the map with the route drawn on it.

I'll have to tinker with it again, or turn caching off entirely, since it doesn't save the blog from the FediDDoS anyway and the rest of the traffic is negligible.

🎤 Drupal Developer Days Leuven 2025: Speaker Spotlight Series 🎤

Join Kristiaan Van den Eynde at #DrupalDevDays this April to learn about common caching mistakes.

💡 This session aims to inform developers about possible pitfalls, helping them avoid making common mistakes and providing some information along the way as to why these mistakes are so common and how they can mess with your site.

🎟️ Register now to secure your spot: drupalcamp.be/en/drupal-dev-da

#DDD25 #Drupal #Cache

🎤 Drupal Developer Days Leuven 2025: Speaker Spotlight Series 🎤

👉 Most people who run a decent-sized Drupal website have probably heard of Varnish.
👉 It comes with some VCL code that might seem confusing at first.
👉 There are some Drupal modules you need to install and configure to invalidate the cache. But how does it work?
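
💡 Rough sketch of the mechanics (an assumed setup, not necessarily what the session covers: the method, port, and header names are whatever your VCL defines): invalidation typically boils down to Drupal sending an HTTP request that Varnish turns into a ban, along these lines:

import http.client

# Assumed setup: the VCL accepts a BAN request and bans cached objects whose
# Cache-Tags response header matches the tag sent here. Host, port, and
# header name are illustrative.
conn = http.client.HTTPConnection("varnish.example.org", 6081, timeout=10)
conn.request("BAN", "/", headers={"Cache-Tags": "node:42"})
print(conn.getresponse().status)   # 200 if the VCL accepted the ban
conn.close()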

Join Thijs Feryn this #DrupalDevDays to learn about Varnish and its features.

🎟️ drupalcamp.be/en/drupal-dev-da

#DDD25 #Drupal #Varnish

👑 Cache is King: Smart Page Eviction with eBPF

arxiv.org/abs/2502.02750

arXiv.org · Cache is King: Smart Page Eviction with eBPF

The page cache is a central part of an OS. It reduces repeated accesses to storage by deciding which pages to retain in memory. As a result, the page cache has a significant impact on the performance of many applications. However, its one-size-fits-all eviction policy performs poorly in many workloads. While the systems community has experimented with a plethora of new and adaptive eviction policies in non-OS settings (e.g., key-value stores, CDNs), it is very difficult to implement such policies in the page cache, due to the complexity of modifying kernel code. To address these shortcomings, we design a novel eBPF-based framework for the Linux page cache, called cachebpf, that allows developers to customize the page cache without modifying the kernel. cachebpf enables applications to customize the page cache policy for their specific needs, while also ensuring that different applications' policies do not interfere with each other and preserving the page cache's ability to share memory across different processes. We demonstrate the flexibility of cachebpf's interface by using it to implement several eviction policies. Our evaluation shows that it is indeed beneficial for applications to customize the page cache to match their workloads' unique properties, and that they can achieve up to 70% higher throughput and 58% lower tail latency.
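
A toy illustration of the "one-size-fits-all eviction policy performs poorly" point (a standalone simulation, unrelated to cachebpf's actual interface): a repeated sequential scan over a file slightly larger than the cache gets essentially zero hits under LRU, while an MRU-style policy keeps most of the file resident.

from collections import OrderedDict

def scan_hit_rate(cache_size, file_pages, passes, policy="lru"):
    """Simulate repeated sequential scans over a file and count cache hits."""
    cache, hits, accesses = OrderedDict(), 0, 0
    for _ in range(passes):
        for page in range(file_pages):
            accesses += 1
            if page in cache:
                hits += 1
                cache.move_to_end(page)          # mark as most recently used
            else:
                if len(cache) >= cache_size:
                    # LRU evicts the oldest entry, MRU evicts the newest one.
                    cache.popitem(last=(policy == "mru"))
                cache[page] = True
    return hits / accesses

# File slightly larger than the cache: LRU thrashes, MRU keeps most of it.
print("LRU hit rate:", scan_hit_rate(100, 110, 5, "lru"))   # ~0.0
print("MRU hit rate:", scan_hit_rate(100, 110, 5, "mru"))   # much higher
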
#linux #kernel #cache
Replied in thread

@wyatt I think part of it might be that newer processors provide instructions to help block one process from reading the memory of another through speculation, branch prediction, and cache behavior. You could block cross-process "snooPING AS usual" on an older processor by invalidating cache on every context switch, but then you'd lose the constraint that you called "performant" (a word I'm having trouble accepting as valid).