A forthright glimpse at tellurian art
Saturday, 14 March 2026
A glimpse into the future
With your blessing, let me vent a little steam. I have far too much to get done, while the current circumstances leave me no room to look back. It has proven true that success and achievements are fleeting; only effort endures. My abilities generally count only toward the next challenge. And then it starts over again: my effort must turn to the next goal. What has proven true is that there are no final successes; one always goes back to one's own affairs.
Sunday, 10 August 2025
CUDA cluster
Some specifications of the machine at Stanford University's Institute for Computational and Mathematical Engineering (ICME).
Controller
· 2 x Intel® Xeon® X5650 2.66GHz 12MB Cache Hexa-core Processor
· 48GB DDR3 1333 REG ECC 12 x 4GB Sticks
· 8 x Hitachi 3TB Ultrastar 7K3000 7200 RPM 64MB Cache SATA
...
Nodes (currently 13 nodes running)
· 2 x Intel Xeon DP E5645 2.40GHz 12MB Cache Hexa-core Processors
· 48GB DDR3 1333 REG ECC Memory 12 x 4GB Sticks
· 1 x 1TB Seagate SATA3 6Gb/s 7200RPM 64MB Cache 2.5 Inch Disk Drive
· 1 x ConnectX-2® InfiniBand adapter card, single-port, QDR 40Gb/s, Gen2
· 7 x NVIDIA Fermi-based C2070 GPUs per node, each with 448 CUDA cores and 6 GB memory
The architecture of a controller plus 13 processing nodes corresponds to that of a cluster, since the components are evidently commodity, off-the-shelf parts.
CUDA
CUDA is a parallel computing technology that makes it possible to use hundreds or thousands of cores (CUDA cores) on a single processor.
Programming with CUDA revolves around writing kernels: functions that are executed in parallel by many threads on a GPU.
In CUDA programming, a function declared with __global__ is executed on the device (GPU). It can only be called from host (CPU) code: your main C/C++ application running on the CPU launches these kernel functions to perform computations on the GPU.
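As a minimal sketch of this host/device split (the kernel, array sizes, and launch configuration are invented for illustration, not taken from the cluster's actual code), a __global__ kernel and its host-side launch look like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the GPU, one thread per array element.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, ha, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n * sizeof(float), cudaMemcpyHostToDevice);

    // Host code launches the kernel: enough blocks of 256 threads to cover n.
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("hc[0] = %f\n", hc[0]);  // 3.000000
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

Compiled with nvcc, the CPU side does only setup and data movement; the parallel arithmetic happens entirely in the kernel.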
CUDA and MPI are both parallel programming technologies, but CUDA is specific to programming GPUs, while MPI is used for inter-process communication in distributed systems.
Graphics cards that support GPGPU technology (e.g. CUDA, OpenCL) not only deliver high-quality visual effects in games and multimedia applications, but also enable substantial performance gains in applications that carry out large numbers of simple, repetitive computations.
Saturday, 12 July 2025
IBM M Technology
IBM M Technology, also known as M technology, z/Architecture, or mainframe technology, is IBM's mainframe computing platform. CISC-based, it can handle massive workloads and is used for critical applications such as banking systems. Unlike RISC-based Unix platforms such as IBM's AIX, M technology makes use exclusively of a CISC-based processor, tailored for resilience and high throughput.
Swagger
Swagger is a suite of tools for API developers from SmartBear Software, and a former specification (the basis of the OpenAPI Specification) for designing, building, documenting, and consuming RESTful APIs.
The specification itself is now called OpenAPI Specification (OAS).
Swagger's open-source tooling can be grouped into different use cases: development, interaction with APIs, and documentation.
Swagger is specifically for REST APIs, not for SOAP or GraphQL.
It maps HTTP methods (GET, POST, PUT, DELETE) to API resources.
https://en.wikipedia.org/wiki/Swagger_(software)
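A minimal OpenAPI (formerly Swagger) document illustrates how the HTTP methods map to API resources; the /pets path and all fields here are invented for illustration:

```yaml
openapi: "3.0.3"
info:
  title: Pet Store (illustrative)
  version: "1.0.0"
paths:
  /pets:
    get:                     # GET maps to "read the collection"
      summary: List all pets
      responses:
        "200":
          description: A JSON array of pets
    post:                    # POST maps to "create a resource"
      summary: Create a pet
      responses:
        "201":
          description: Pet created
```

Tools such as Swagger UI render a document like this into interactive, browsable API documentation.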
Saturday, 14 September 2024
deadlock
A deadlock is a situation in which two or more courses of action block each other, causing a stalemate. In computer science, a deadlock is an impasse in which multiple processes are blocked because each waits indefinitely for the others. More specifically, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for a resource held by yet another waiting process.
related issues include:
Phantom reads -
A phantom read occurs when a transaction retrieves a set of rows twice and, in between, another committed transaction inserts rows into or removes rows from that set. It is usually an isolation issue in the DBMS. What chiefly differentiates phantom reads from nonrepeatable reads and dirty reads is that they involve insert and delete operations on the rows being read between commits. The new rows are referred to as "phantoms".
Nonrepeatable Reads-
A nonrepeatable read occurs when a transaction reads the same row twice but gets different data each time. The keyword differentiating it from phantom and dirty reads is SAME ROW.
Lost update -
Write operations from parallel transactions modify the same row; the changes of the first transaction are overwritten, and thus lost, by the second.
Dirty read - a dirty read in SQL occurs when a transaction reads data that has been modified by another transaction but not yet committed, i.e. it reads uncommitted changes. It is closely related to temporary updates: a temporary update describes the state of the data, while a dirty read describes the act of another transaction accessing that data. Dirty reads are a direct consequence of transactions reading temporary (uncommitted) updates from other transactions.
Incorrect summary - a transaction performs aggregate functions (such as SUM or COUNT) over a dataset while other transactions are inserting, updating, or deleting rows in that dataset, leading to an inaccurate summary. When you come across inconsistent wage reports, chances are it is an incorrect-summary issue.
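A nonrepeatable read can be sketched with two SQLite connections playing the two transactions (the table and column names are invented). Because the reader holds no open transaction of its own, it behaves like a low isolation level and sees the writer's commit between its two reads of the same row:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file stand in for two transactions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
reader = sqlite3.connect(path)
writer = sqlite3.connect(path)

writer.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO account VALUES (1, 100)")
writer.commit()

# First read of the row.
first = reader.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]

# Another transaction commits a change in between.
writer.execute("UPDATE account SET balance = 50 WHERE id = 1")
writer.commit()

# Second read of the SAME ROW returns different data.
second = reader.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]

print(first, second)  # 100 50
```

Wrapping both reads of the reader in a single transaction at a repeatable-read or serializable isolation level is what prevents this anomaly.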
Techniques to handle said issues:
shared lock - allows multiple transactions to read a resource without writing to it.
exclusive lock - is granted to a transaction when it wants to write or modify a resource.
2-phase commit - is a protocol used in distributed databases to ensure all nodes in a distributed system either commit or abort a transaction consistently, even in the presence of failures.
It has two phases:
1- Prepare Phase: The coordinator asks all participating nodes if they can commit the transaction. If all nodes agree, they respond with a "yes." If any node cannot commit, it responds with a "no."
2- Commit Phase: If all nodes agree to commit, the coordinator instructs them to do so. If any node cannot commit, the coordinator instructs all nodes to roll back the transaction.
3-phase commit - the same as 2-phase commit, with an extra stage (pre-commit) added in between
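The two phases above can be sketched with an in-memory coordinator. The Node class and its methods are invented for illustration; a real implementation must also handle coordinator failure, timeouts, and durable logging:

```python
class Node:
    """A participant in the distributed transaction (illustrative)."""
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.state = "pending"

    def prepare(self):
        return self.can_commit   # the node's vote in phase 1

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1 (prepare): the coordinator collects a vote from every node.
    votes = [node.prepare() for node in participants]
    # Phase 2 (commit/abort): unanimous "yes" means commit; otherwise
    # every node is instructed to roll back.
    if all(votes):
        for node in participants:
            node.commit()
        return "committed"
    for node in participants:
        node.rollback()
    return "aborted"

print(two_phase_commit([Node(True), Node(True)]))   # committed
print(two_phase_commit([Node(True), Node(False)]))  # aborted
```

A single "no" vote aborts the whole transaction on every node, which is exactly the all-or-nothing guarantee the protocol exists to provide.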
Wednesday, 31 July 2024
Time to get serious and put in the effort
I have put it off long enough. Now it is time to live the thoughts that have stayed in my head for so long. To realize the thoughts of a life fully lived and to be the man I want to be and admire. To act out the fantasies of a disciplined, well-trained mind and body, with only one goal in sight: to win. To win by doing the right thing and enjoying the process along the way.