Need to take a look soon at what it can do…
Nexus Software | Coffing DW
Category: Dev
Why Are There So Many Misconceptions Around Agile?
Agile is not a set of practices or rules.
Andy Hunt
Why Are There So Many Misconceptions Around Agile? (toolshed.com)
JDBCParquetWriter is a Java Library for writing Apache Parquet Files from JDBC Tables or ResultSets
Old but GOLD! User Interface Principles…
DBT (Data Build Tool)
DBT (Data Build Tool) is a command-line tool that helps software developers and data analysts make their data-processing workflows more efficient. It is used to define, test, and orchestrate data transformations that run in modern data warehouses. Here are some key aspects of the role DBT plays in the data landscape:
- Transformations as code: DBT lets you write, version, and manage data transformations as code. This brings software-engineering best practices such as code reviews, version control, and continuous integration/continuous deployment (CI/CD) to data processing.
- Modularity and reusability: With DBT you can build transformations in modular form, making them easy to reuse and maintain. This improves the consistency and efficiency of data transformations.
- Automation: DBT automates the workflow from raw-data processing to the creation of reporting data. It executes transformations in a specific order, based on the dependencies between the individual data models.
- Tests and data quality: DBT supports testing data to ensure it is transformed correctly. You can define tests for data models to guarantee data integrity and quality.
- Documentation: DBT automatically generates documentation for the data models, which is important for transparency and for understanding the data-processing workflows within a team or company.
- Performance optimization: By using the resources of modern data warehouses efficiently, DBT enables fast processing of large data volumes, contributing to performance optimization.
- Integration with data warehouses: DBT is compatible with a wide range of modern data warehouses such as Snowflake, BigQuery, Redshift, and others, which makes it easy to integrate into existing data infrastructures.
In short, DBT helps simplify, automate, and improve the data-transformation process, leading to more efficient and reliable data-processing workflows.
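The dependency-based execution order described above can be sketched in plain Python. This is an illustration of the concept only, not DBT's actual implementation; the model names and the `deps` mapping are invented for the example:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical models and their upstream dependencies,
# analogous to the DAG that DBT derives from model references.
deps = {
    "stg_orders": [],                      # staging model on raw data
    "stg_customers": [],
    "orders_enriched": ["stg_orders", "stg_customers"],
    "daily_revenue": ["orders_enriched"],  # reporting model
}

def run_order(deps):
    """Return an execution order in which every model runs
    only after all of its dependencies have run."""
    return list(TopologicalSorter(deps).static_order())

print(run_order(deps))
```

Any valid topological order works; staging models always come before the models that depend on them, which is exactly the guarantee a transformation tool needs before it can run models in sequence (or in parallel within a level).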
DBT can be used both in cloud environments and on-premises; it is not limited to cloud solutions. Its core functionality, based on a command-line interface, can be installed and run on any system that supports Python. This makes DBT flexible to deploy, regardless of whether your data infrastructure is hosted in the cloud or in a local (on-premises) data center.
If you want to use DBT on-premises, you need to make sure your on-premises data warehouse or database is supported. DBT supports a wide range of databases and data warehouses, including those commonly deployed on-premises.
It is worth noting that while DBT itself can run on-premises, certain add-on features or products, such as dbt Cloud, are SaaS offerings with specific cloud-based advantages, such as an integrated development environment and extended orchestration and monitoring tools. Whether you run DBT on-premises or in the cloud ultimately depends on your specific data infrastructure and business requirements.
In its core version, DBT (Data Build Tool) is an open-source tool. Developed by Fishtown Analytics (now dbt Labs), it enables analysts and developers to run transformations in their data warehouse using SQL code managed under version control. This brings software-engineering practices such as code reviews and version control to data analysis.
The open-source version of DBT is free to use and provides the basic functionality needed to define, test, and run data transformations. There is also a commercial version, dbt Cloud, which offers additional features such as a web-based IDE, extended scheduling options, and better team-collaboration tools.
The open-source project is available on GitHub, where users can inspect the code, contribute, and follow the software's development. This fosters transparency and community participation, two key aspects of the open-source philosophy.
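DBT's schema tests assert properties of a model's columns, most commonly that a column is never null or contains no duplicates. A minimal Python analogue of those two checks (the table and column names here are invented for illustration):

```python
def check_not_null(rows, column):
    """True if no row has a NULL (None) in the given column."""
    return all(row[column] is not None for row in rows)

def check_unique(rows, column):
    """True if the column contains no duplicate values."""
    values = [row[column] for row in rows]
    return len(values) == len(set(values))

# A hypothetical result set, represented as a list of dicts:
customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": "b@example.com"},
    {"id": 2, "email": None},  # duplicate id, null email
]

print(check_not_null(customers, "id"))   # True: no id is missing
print(check_unique(customers, "id"))     # False: id 2 appears twice
```

In DBT itself these checks are declared in YAML next to the model and executed as SQL against the warehouse; the point here is only the contract they express.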
Tools for Thought
Tools for Thought is an exercise in retrospective futurism; that is, I wrote it in the early 1980s, attempting to look at what the mid 1990s would be like. My odyssey started when I discovered Xerox PARC and Doug Engelbart and realized that all the journalists who had descended upon Silicon Valley were missing the real story. Yes, the tales of teenagers inventing new industries in their garages were good stories. But the idea of the personal computer did not spring full-blown from the mind of Steve Jobs. Indeed, the idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry nor orthodox computer science, nor even homebrew computerists. If it wasn’t for people like J.C.R. Licklider, Doug Engelbart, Bob Taylor, Alan Kay, it wouldn’t have happened. But their work was rooted in older, equally eccentric, equally visionary, work, so I went back to piece together how Boole and Babbage and Turing and von Neumann — especially von Neumann — created the foundations that the later toolbuilders stood upon to create the future we live in today. You can’t understand where mind-amplifying technology is going unless you understand where it came from.
howard rheingold’s | tools for thought
Staff Engineer
At most technology companies, you’ll reach Senior Software Engineer, the career level, in five to eight years. At that point your path branches, and you have the opportunity to pursue engineering management or continue down the path of technical excellence to become a Staff Engineer.
Over the past few years we've seen a flurry of books unlocking the engineering-manager career path, like Camille Fournier's The Manager's Path, Julie Zhuo's The Making of a Manager, and my own An Elegant Puzzle. The management career isn't an easy one, but increasingly there is a map available.
Stories of reaching Staff-plus engineering roles – StaffEng | StaffEng
What is InterSystems IRIS?
Key capabilities of InterSystems IRIS:
1. Multi-Model Database
InterSystems IRIS supports multiple data models to suit various application needs.
2. High-Performance SQL
Its SQL engine is designed for high performance and efficiency.
3. Integrated Analytics
Integrated analytics tools enable real-time data analysis.
4. Scalability and High Availability
InterSystems IRIS provides scalability and high availability for mission-critical applications.
5. Interoperability
Extensive interoperability features facilitate seamless connections with other systems and data sources.
6. Cloud-Native Deployment
Deployment options include on-premises, cloud, and hybrid environments.
7. Advanced Security
Advanced security features protect sensitive data and ensure authorized access.
8. Comprehensive Development Tools
Its development tools enhance productivity and streamline application development.
9. Extensive Ecosystem
A developer community and an ecosystem of partners and third-party tools surround the platform.
10. Support for Various Programming Languages
Developers can interact with InterSystems IRIS using various programming languages, including Java, .NET, Python, and others.
Key differences between Data Mesh and Data Fabric:
Data Mesh and Data Fabric are two distinct concepts in the field of data management, each addressing different aspects of modern data architecture and data governance. Here, I’ll describe the key differences between Data Mesh and Data Fabric:
1. Core Focus:
- Data Mesh:
Data Mesh primarily focuses on the organization’s approach to data ownership, decentralization, and democratization. It addresses the challenges of scaling data management within large organizations by emphasizing domain-specific data ownership and the distribution of data responsibilities to various teams or domains.
- Data Fabric:
Data Fabric primarily focuses on data integration, abstraction, and seamless access. It provides a unified and flexible data management framework that allows organizations to integrate, access, and manage data across diverse sources, formats, and locations.
2. Data Ownership and Responsibility:
- Data Mesh:
In Data Mesh, domain-specific teams take ownership of their data products, including data quality, data processing, and data consumption. Each team is responsible for their domain’s data.
- Data Fabric:
Data Fabric does not prescribe a specific approach to data ownership. It is more concerned with providing a unified and consistent view of data, regardless of who owns it. Data ownership may still be centralized or distributed based on the organization’s needs.
3. Data as a Product:
- Data Mesh:
Data in Data Mesh is treated as a product. Cross-functional data product teams are responsible for end-to-end data lifecycle management, including data generation, processing, and consumption.
- Data Fabric:
While data management is an important aspect of Data Fabric, it doesn’t inherently focus on treating data as a product. Instead, it provides a framework for data integration and access, leaving the data management approach to the organization.
4. Data Platform vs. Data Architecture:
- Data Mesh:
Data Mesh often involves building data platforms that are owned and operated by data product teams. These platforms support the domain-specific data needs of each team.
- Data Fabric:
Data Fabric is more of an architectural concept that encompasses data integration, abstraction, and access. It may involve the use of data platforms, but it is not inherently focused on building separate data platforms for each domain.
5. Cultural and Organizational Shift:
- Data Mesh:
Implementing Data Mesh often requires a significant cultural shift within the organization. It involves changes in how teams collaborate, communicate, and take ownership of data-related tasks.
- Data Fabric:
Data Fabric is more about providing a technical framework for data management and integration. While it may influence data governance practices, it does not necessarily mandate a cultural shift to the same extent as Data Mesh.
6. Data Democratization:
- Data Mesh:
Data Mesh places a strong emphasis on democratizing data by allowing more teams and individuals to access and leverage data for their specific needs.
- Data Fabric:
Data Fabric also supports data democratization by providing a unified and accessible data layer, but it does not inherently focus on democratization as its primary goal.
In summary, Data Mesh and Data Fabric are distinct approaches to addressing the challenges of modern data management. Data Mesh emphasizes decentralization, domain-specific ownership, and democratization of data, while Data Fabric focuses on data integration, abstraction, and providing a unified data layer. The choice between these concepts depends on an organization’s specific needs, culture, and data management goals.
Data Mesh? Data as product?
Data Mesh is a relatively new concept in the software world that addresses the challenges of managing and scaling data in modern, decentralized, and large-scale data environments. It was introduced by Zhamak Dehghani in a widely cited 2019 article. Data Mesh proposes a paradigm shift in data architecture and organization by treating data as a product and applying principles from software engineering to data management. Here’s an overview of what Data Mesh means in the software world:
- Decentralized Ownership:
In a Data Mesh architecture, data is not the responsibility of a centralized data team alone. Instead, ownership and responsibility for data are distributed across different business units or “domains.” Each domain is responsible for its data, including data quality, governance, and usage.
- Data as a Product:
Data is treated as a product, much like software, with clear ownership and accountability. Data product teams are responsible for data pipelines, quality, and ensuring that data serves the needs of the consumers.
- Domain-Oriented Data Ownership:
Each domain within an organization has its own data product teams. These teams understand the specific data needs of their domain and are responsible for the entire data lifecycle, from ingestion and transformation to serving data consumers.
- Data Mesh Principles:
Data Mesh is built on four key principles:
  - Domain-Oriented Ownership: Domains own their data, making them accountable for its quality and usability.
  - Self-serve Data Infrastructure: Data infrastructure is designed to be self-serve, allowing domain teams to manage their data pipelines.
  - Product Thinking: Treat data as a product, with clear value propositions and consumers in mind.
  - Federated Computational Governance: Governance and control are distributed, with a focus on enabling data consumers to make the most of the data while ensuring compliance and security.
- Data Democratization:
Data Mesh promotes data democratization by making data accessible to a broader range of users and teams within an organization. Self-service tools and well-documented data products empower users to access and analyze data without extensive technical knowledge.
- Scaling Data:
Data Mesh is particularly relevant in large-scale and complex data ecosystems. It allows organizations to scale their data capabilities by distributing data ownership and enabling parallel development of data products.
- Data Quality and Trust:
With clear ownership and accountability, Data Mesh encourages a focus on data quality, governance, and documentation. This, in turn, builds trust in the data and promotes its effective use.
- Flexibility and Adaptability:
Data Mesh is adaptable to changing business needs and evolving data sources. It allows organizations to respond more quickly to data demands and opportunities.
- Technology Stack:
Implementing a Data Mesh often involves the use of modern data technologies, data lakes, data warehouses, and microservices architecture. The technology stack should support the principles of Data Mesh and enable decentralized data ownership and management.
Data Mesh represents a shift in how organizations structure and manage their data to meet the challenges of the digital age. By distributing data ownership and treating data as a product, Data Mesh aims to improve data quality, accessibility, and usability while facilitating scalability and adaptability in the face of evolving data needs.
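The “data as a product” idea can be made concrete with a small sketch: a data product bundles the data itself with domain-ownership metadata and the quality checks its owning team defines. Everything here (the `DataProduct` class, the “sales” domain, the checks) is illustrative, not part of any Data Mesh standard or library:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A minimal 'data as a product' sketch: data plus
    domain ownership and self-contained quality checks."""
    name: str
    owner_domain: str  # the team accountable for this data
    rows: list
    checks: list = field(default_factory=list)  # functions: list -> bool

    def is_healthy(self):
        """Run every quality check the owning team defined."""
        return all(check(self.rows) for check in self.checks)

# A hypothetical product owned by the sales domain:
orders = DataProduct(
    name="orders",
    owner_domain="sales",
    rows=[{"order_id": 1, "amount": 40}, {"order_id": 2, "amount": 60}],
    checks=[
        lambda rows: len(rows) > 0,                        # not empty
        lambda rows: all(r["amount"] >= 0 for r in rows),  # no negative amounts
    ],
)

print(orders.owner_domain, orders.is_healthy())
```

The point of the sketch is the shape, not the code: ownership and quality travel with the data, so consumers can ask any product who is accountable for it and whether it currently meets its own contract.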