User Interface Principles – Win32 apps | Microsoft Learn
Designing a User Interface – Win32 apps | Microsoft Learn
p3509-zhu.pdf (vldb.org)
Selecting the optimal cloud target to migrate SQL estates from on-premises to the cloud remains a challenge. Current solutions are not only time-consuming and error-prone, requiring significant user input, but also fail to provide appropriate recommendations. We present Doppler, a scalable recommendation engine that provides right-sized Azure SQL Platform-as-a-Service (PaaS) recommendations without requiring access to sensitive customer data and queries. Doppler introduces a novel price-performance methodology that allows customers to get a personalized rank of relevant cloud targets solely based on low-level resource statistics, such as latency and memory usage. Doppler supplements this rank with internal knowledge of Azure customer behavior to help guide new migration customers towards one optimal target. Experimental results over a 9-month period from prospective and existing customers indicate that Doppler can identify optimal targets and adapt to changes in customer workloads. It has also found cost-saving opportunities among over-provisioned cloud customers, without compromising on capacity or other requirements. Doppler has been integrated and released in the Azure Data Migration Assistant v5.5, which receives hundreds of assessment requests daily.
2401.09621.pdf (arxiv.org)
arXiv:2401.09621v1 [cs.DB] 17 Jan 2024
Contemporary approaches to data management are increasingly relying on unified analytics and AI platforms to foster collaboration, interoperability, seamless access to reliable data, and high performance. Data Lakes featuring open standard table formats such as Delta Lake, Apache Hudi, and Apache Iceberg are central components of these data architectures. Choosing the right format for managing a table is crucial for achieving the objectives mentioned above. The challenge lies in selecting the best format, a task that is onerous and can yield temporary results, as the ideal choice may shift over time with data growth, evolving workloads, and the competitive development of table formats and processing engines. Moreover, restricting data access to a single format can hinder data sharing, resulting in diminished business value over the long term. The ability to seamlessly interoperate between formats and with negligible overhead can effectively address these challenges. Our solution in this direction is an innovative omni-directional translator, XTable, that facilitates writing data in one format and reading it in any format, thus achieving the desired format interoperability. In this work, we demonstrate the effectiveness of XTable through application scenarios inspired by real-world use cases.
How to Detect Locking and Blocking In Your Analysis Services Environment – byoBI.com (wordpress.com)
3 Methods for Shredding Analysis Services Extended Events – byoBI.com (wordpress.com)
Introduction To Analysis Services Extended Events – Mark Vaillancourt (markvsql.com)
<!-- This script supplied by Bill Anton http://byobi.com/blog/2013/06/extended-events-for-analysis-services/ -->
<Create
xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200"
xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
<ObjectDefinition>
<Trace>
<ID>MyTrace</ID>
<!--Example: <ID>QueryTuning_20130624</ID>-->
<Name>MyTrace</Name>
<!--Example: <Name>QueryTuning_20130624</Name>-->
<ddl300_300:XEvent>
<event_session name="xeas"
dispatchLatency="1"
maxEventSize="4"
maxMemory="4"
memoryPartitionMode="none"
eventRetentionMode="allowSingleEventLoss"
trackCausality="true">
<!-- ### COMMAND EVENTS ### -->
<!--<event package="AS" name="CommandBegin" />-->
<!--<event package="AS" name="CommandEnd" />-->
<!-- ### DISCOVER EVENTS ### -->
<!--<event package="AS" name="DiscoverBegin" />-->
<!--<event package="AS" name="DiscoverEnd" />-->
<!-- ### DISCOVER SERVER STATE EVENTS ### -->
<!--<event package="AS" name="ServerStateDiscoverBegin" />-->
<!--<event package="AS" name="ServerStateDiscoverEnd" />-->
<!-- ### ERRORS AND WARNING ### -->
<!--<event package="AS" name="Error" />-->
<!-- ### FILE LOAD AND SAVE ### -->
<!--<event package="AS" name="FileLoadBegin" />-->
<!--<event package="AS" name="FileLoadEnd" />-->
<!--<event package="AS" name="FileSaveBegin" />-->
<!--<event package="AS" name="FileSaveEnd" />-->
<!--<event package="AS" name="PageInBegin" />-->
<!--<event package="AS" name="PageInEnd" />-->
<!--<event package="AS" name="PageOutBegin" />-->
<!--<event package="AS" name="PageOutEnd" />-->
<!-- ### LOCKS ### -->
<!--<event package="AS" name="Deadlock" />-->
<!--<event package="AS" name="LockAcquired" />-->
<!--<event package="AS" name="LockReleased" />-->
<!--<event package="AS" name="LockTimeout" />-->
<!--<event package="AS" name="LockWaiting" />-->
<!-- ### NOTIFICATION EVENTS ### -->
<!--<event package="AS" name="Notification" />-->
<!--<event package="AS" name="UserDefined" />-->
<!-- ### PROGRESS REPORTS ### -->
<!--<event package="AS" name="ProgressReportBegin" />-->
<!--<event package="AS" name="ProgressReportCurrent" />-->
<!--<event package="AS" name="ProgressReportEnd" />-->
<!--<event package="AS" name="ProgressReportError" />-->
<!-- ### QUERY EVENTS ### -->
<!--<event package="AS" name="QueryBegin" />-->
<event package="AS" name="QueryEnd" />
<!-- ### QUERY PROCESSING ### -->
<!--<event package="AS" name="CalculateNonEmptyBegin" />-->
<!--<event package="AS" name="CalculateNonEmptyCurrent" />-->
<!--<event package="AS" name="CalculateNonEmptyEnd" />-->
<!--<event package="AS" name="CalculationEvaluation" />-->
<!--<event package="AS" name="CalculationEvaluationDetailedInformation" />-->
<!--<event package="AS" name="DaxQueryPlan" />-->
<!--<event package="AS" name="DirectQueryBegin" />-->
<!--<event package="AS" name="DirectQueryEnd" />-->
<!--<event package="AS" name="ExecuteMDXScriptBegin" />-->
<!--<event package="AS" name="ExecuteMDXScriptCurrent" />-->
<!--<event package="AS" name="ExecuteMDXScriptEnd" />-->
<!--<event package="AS" name="GetDataFromAggregation" />-->
<!--<event package="AS" name="GetDataFromCache" />-->
<!--<event package="AS" name="QueryCubeBegin" />-->
<!--<event package="AS" name="QueryCubeEnd" />-->
<!--<event package="AS" name="QueryDimension" />-->
<!--<event package="AS" name="QuerySubcube" />-->
<!--<event package="AS" name="ResourceUsage" />-->
<!--<event package="AS" name="QuerySubcubeVerbose" />-->
<!--<event package="AS" name="SerializeResultsBegin" />-->
<!--<event package="AS" name="SerializeResultsCurrent" />-->
<!--<event package="AS" name="SerializeResultsEnd" />-->
<!--<event package="AS" name="VertiPaqSEQueryBegin" />-->
<!--<event package="AS" name="VertiPaqSEQueryCacheMatch" />-->
<!--<event package="AS" name="VertiPaqSEQueryEnd" />-->
<!-- ### SECURITY AUDIT ### -->
<!--<event package="AS" name="AuditAdminOperationsEvent" />-->
<event package="AS" name="AuditLogin" />
<!--<event package="AS" name="AuditLogout" />-->
<!--<event package="AS" name="AuditObjectPermissionEvent" />-->
<!--<event package="AS" name="AuditServerStartsAndStops" />-->
<!-- ### SESSION EVENTS ### -->
<!--<event package="AS" name="ExistingConnection" />-->
<!--<event package="AS" name="ExistingSession" />-->
<!--<event package="AS" name="SessionInitialize" />-->
<target package="Package0" name="event_file">
<!-- Make sure SSAS instance Service Account can write to this location -->
<parameter name="filename" value="C:\SSASExtendedEvents\MyTrace.xel" />
<!--Example: <parameter name="filename" value="C:\Program Files\Microsoft SQL Server\MSAS11.SSAS_MD\OLAP\Log\trace_results.xel" />-->
</target>
</event_session>
</ddl300_300:XEvent>
</Trace>
</ObjectDefinition>
</Create>
/****
Shreds the .xel file(s) written by the trace above into a relational result set.
Base query provided by Francesco De Chirico
http://francescodechirico.wordpress.com/2012/08/03/identify-storage-engine-and-formula-engine-bottlenecks-with-new-ssas-xevents-5/
****/
SELECT
xe.TraceFileName
, xe.TraceEvent
, xe.EventDataXML.value('(/event/data[@name="EventSubclass"]/value)[1]','int') AS EventSubclass
, xe.EventDataXML.value('(/event/data[@name="ServerName"]/value)[1]','varchar(50)') AS ServerName
, xe.EventDataXML.value('(/event/data[@name="DatabaseName"]/value)[1]','varchar(50)') AS DatabaseName
, xe.EventDataXML.value('(/event/data[@name="NTUserName"]/value)[1]','varchar(50)') AS NTUserName
, xe.EventDataXML.value('(/event/data[@name="ConnectionID"]/value)[1]','int') AS ConnectionID
, xe.EventDataXML.value('(/event/data[@name="StartTime"]/value)[1]','datetime') AS StartTime
, xe.EventDataXML.value('(/event/data[@name="EndTime"]/value)[1]','datetime') AS EndTime
, xe.EventDataXML.value('(/event/data[@name="Duration"]/value)[1]','bigint') AS Duration
, xe.EventDataXML.value('(/event/data[@name="TextData"]/value)[1]','varchar(max)') AS TextData
FROM
(
SELECT
[FILE_NAME] AS TraceFileName
, OBJECT_NAME AS TraceEvent
, CONVERT(XML,Event_data) AS EventDataXML
FROM sys.fn_xe_file_target_read_file ( 'C:\SSASExtendedEvents\MyTrace*.xel', null, null, null )
) xe
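Building on the shredded result set above, a common next step is to surface the slowest queries the trace captured. The snippet below is a minimal sketch, assuming the same file path and the QueryEnd event enabled in the trace; the TOP (10) cutoff is illustrative only, and Duration is assumed to be reported in milliseconds for this event.
-- List the ten longest-running QueryEnd events from the trace files.
SELECT TOP (10)
xe.EventDataXML.value('(/event/data[@name="StartTime"]/value)[1]','datetime') AS StartTime
, xe.EventDataXML.value('(/event/data[@name="NTUserName"]/value)[1]','varchar(50)') AS NTUserName
, xe.EventDataXML.value('(/event/data[@name="Duration"]/value)[1]','bigint') AS Duration
, xe.EventDataXML.value('(/event/data[@name="TextData"]/value)[1]','varchar(max)') AS TextData
FROM
(
SELECT
OBJECT_NAME AS TraceEvent
, CONVERT(XML,Event_data) AS EventDataXML
FROM sys.fn_xe_file_target_read_file ( 'C:\SSASExtendedEvents\MyTrace*.xel', null, null, null )
) xe
WHERE xe.TraceEvent = 'QueryEnd'
ORDER BY Duration DESC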
DBT (Data Build Tool) is a command-line tool that helps software developers and data analysts make their data-processing workflows more efficient. It is used to define, test, and orchestrate data transformations that run in modern data warehouses.
In short, DBT helps simplify, automate, and improve the data-transformation process, which leads to more efficient and reliable data-processing workflows.
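As a rough illustration of what such a transformation looks like in practice, here is a minimal dbt model sketch; the model name, source model, and column names (daily_revenue, stg_orders, order_id, order_date, amount) are hypothetical placeholders rather than part of any particular project.
-- models/daily_revenue.sql (hypothetical model and columns)
-- dbt resolves ref() to the concrete relation in the target warehouse and
-- materializes the SELECT below according to the configured materialization.
{{ config(materialized='table') }}

select
    order_date
  , count(order_id) as order_count
  , sum(amount)     as revenue
from {{ ref('stg_orders') }}
group by order_date
Running dbt run builds the model in the target warehouse, and dbt test executes any tests defined against it.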
DBT can be used both in cloud environments and on-premises; it is not limited to cloud solutions. Its core functionality, based on the command-line interface, can be installed and run on any system that supports Python, so DBT can be deployed flexibly regardless of whether your data infrastructure is hosted in the cloud or in a local (on-premises) data center.
If you want to use DBT on-premises, you need to make sure that your on-premises data warehouse or database is supported. DBT supports a wide range of databases and data warehouses, including those commonly deployed on-premises.
It is important to note that while DBT itself can run on-premises, certain add-on features or products, such as dbt Cloud, are SaaS offerings that provide specific cloud-based benefits, such as an integrated development environment and more advanced orchestration and monitoring tools. Whether you run DBT on-premises or in the cloud ultimately depends on your specific data infrastructure and business requirements.
DBT (Data Build Tool) is, in its core version, an open-source tool. Developed by Fishtown Analytics (now dbt Labs), it enables analysts and developers to run transformations in their data warehouse using SQL code that is managed under version control. This promotes software-engineering practices such as code reviews and version control in the field of data analytics.
The open-source version of DBT is free to use and provides the basic functionality needed to define, test, and run data transformations. There is also a commercial offering, dbt Cloud, which adds features such as a web-based IDE, advanced scheduling options, and better team-collaboration tools.
The open-source project is available on GitHub, where users can inspect the code, contribute their own changes, and follow the development of the software. This fosters transparency and community participation, two key aspects of the open-source philosophy.
StarRocks is a Native Vectorized Engine implemented in C++, while Trino is implemented in Java and uses limited vectorization technology. Vectorization technology helps StarRocks utilize CPU processing power more efficiently. This type of query engine has the following characteristics:
- It can fully utilize the efficiency of columnar data management. This type of query engine reads data from columnar storage, and the way it manages data in memory, as well as the way operators process data, is also columnar. Such engines can use the CPU cache more effectively, improving CPU execution efficiency.
- It can fully utilize the SIMD instructions supported by the CPU. This allows the CPU to complete more data calculations in fewer clock cycles. According to data provided by StarRocks, using vectorized instructions can improve overall performance by 3-10 times.
- It can compress data more efficiently to greatly reduce memory usage. This makes this type of query engine more capable of handling large data volume query requests.
In fact, Trino is also exploring vectorization technology. Trino has some SIMD code, but it lags behind StarRocks in depth and coverage, and work on its vectorization support is ongoing (see https://github.com/trinodb/trino/issues/14237). Meta’s Velox project aims to use vectorization technology to accelerate Trino queries; however, so far, very few companies have formally used Velox in production environments.
Comparison of the Open Source Query Engines: Trino and StarRocks
Trino (formerly known as PrestoSQL) and Dremio are both distributed query engines, but they are designed with different architectures and use cases in mind.
Choosing between Trino and Dremio depends on the specific needs of the organization. If the primary need is fast, ad-hoc query execution across diverse data sources, Trino might be the better choice. If there’s a requirement for a comprehensive data platform with features like data curation, cataloging, and acceleration, Dremio could be more suitable.
Apache Flink and Trino (formerly known as PrestoSQL) are both distributed processing systems, but they are designed for different types of workloads and use cases in the big data ecosystem.
Choosing between Flink and Trino depends on the specific requirements of the workload, such as the need for real-time processing, the complexity of the queries, the size of the data, and the latency requirements.