Updated Microsoft 70-767 Exam Guide


Ucertify offers a free demo for the 70-767 exam. "Implementing a SQL Data Warehouse (beta)", also known as the 70-767 exam, is a Microsoft certification. This set of posts, Passing the Microsoft 70-767 Exam, will help you prepare for it. The 70-767 Questions & Answers cover all the knowledge points of the real exam, drawn from real Microsoft 70-767 exams and revised by experts.


If you would like to know more about the 70-767 exam, contact us or simply visit our website, 2PASSEASY.COM.

Q1. You are adding a new capability to several dozen SQL Server Integration Services (SSIS) packages.

The new capability is not available as an SSIS task. Each package must be extended with the same new capability.

You need to add the new capability to all the packages without copying the code between packages.

What should you do?

A. Use the Expression task.

B. Use the Script task.

C. Develop a custom task.

D. Use the Script component.

E. Develop a custom component.

Answer:

Explanation:

References:

http://msdn.microsoft.com/en-us/library/ms135965.aspx
http://msdn.microsoft.com/en-us/library/ms345161.aspx


Q2. DRAG DROP

You are designing an extract, transform, load (ETL) process with SQL Server Integration Services (SSIS). Two packages, Package A and Package B, will be designed. Package A will execute Package B.

Both packages must reference a file path corresponding to an input folder where files will be located for further processing.

You need to design a solution so that the file path can be easily configured with the least administrative and development effort.

Which four actions should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)

Answer:


Q3. You administer a SQL Server Integration Services (SSIS) solution in the SSIS catalog. A SQL Server Agent job is used to execute a package daily with the basic logging level.

Recently, the package execution failed because of a primary key violation when the package inserted data into the destination table.

You need to identify all previous times that the package execution failed because of a primary key violation.

What should you do?

A. Use an event handler for OnError for the package.

B. Use an event handler for OnError for each data flow task.

C. Use an event handler for OnTaskFailed for the package.

D. View the job history for the SQL Server Agent job.

E. View the All Messages subsection of the All Executions report for the package.

F. Store the System::SourceID variable in the custom log table.

G. Store the System::ServerExecutionID variable in the custom log table.

H. Store the System::ExecutionInstanceGUID variable in the custom log table.

I. Enable the SSIS log provider for SQL Server for OnError in the package control flow.

J. Enable the SSIS log provider for SQL Server for OnTaskFailed in the package control flow.

K. Deploy the project by using dtutil.exe with the /COPY DTS option.

L. Deploy the project by using dtutil.exe with the /COPY SQL option.

M. Deploy the .ispac file by using the Integration Services Deployment Wizard.

N. Create a SQL Server Agent job to execute the SSISDB.catalog.validate_project stored procedure.

O. Create a SQL Server Agent job to execute the SSISDB.catalog.validate_package stored procedure.

P. Create a SQL Server Agent job to execute the SSISDB.catalog.create_execution and SSISDB.catalog.start_execution stored procedures.

Q. Create a table to store error information. Create an error output on each data flow destination that writes OnError event text to the table.

R. Create a table to store error information. Create an error output on each data flow destination that writes OnTaskFailed event text to the table.

Answer: E
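
For reference, the All Executions report surfaces data from the SSISDB catalog views, so the same failures can also be found with a query along these lines. This is a minimal sketch; catalog.executions and catalog.event_messages are the standard SSISDB views, but the error-text filter is an assumption:

USE SSISDB;

-- List prior executions whose OnError messages mention a primary key
-- violation (the LIKE filter on the message text is an assumption).
SELECT e.execution_id,
       e.package_name,
       em.message_time,
       em.message
FROM catalog.event_messages AS em
JOIN catalog.executions AS e
  ON e.execution_id = em.operation_id
WHERE em.event_name = N'OnError'
  AND em.message LIKE N'%PRIMARY KEY%'
ORDER BY em.message_time DESC;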


Q4. You are the data steward for a Business Intelligence project.

You must identify duplicate rows stored in a SQL Server table and output discoveries to a CSV file. A Data Quality Services (DQS) knowledge base has been created to support this project.

You need to produce the CSV file with the least amount of development effort. What should you do?

A. Create an Integration Services package and use a Data Profiling transform.

B. Create a custom .NET application based on the Knowledgebase class.

C. Create a data quality project.

D. Create a CLR stored procedure based on the Knowledgebase class.

E. Create a Master Data Services (MDS) business rule.

Answer: C

Explanation: 

Reference:

http://msdn.microsoft.com/en-us/library/hh213052.aspx


Q5. Occasionally a job that executes an existing SQL Server Integration Services (SSIS) package does not complete and nothing is processed.

You need to ensure that package logging occurs. Your solution must minimize deployment and development efforts.

What should you do?

A. Create a reusable custom logging component.

B. Use the gacutil command.

C. Use the Project Deployment Wizard.

D. Run the package by using the dtexec /rep /conn command.

E. Add a data tap on the output of a component in the package data flow.

F. Create an OnError event handler.

G. Use the dtutil /copy command.

H. Deploy the package by using an msi file.

I. Run the package by using the dtexec /dumperror /conn command.

J. Run the package by using the dtexecui.exe utility and the SQL Log provider.

K. Deploy the package to the Integration Services catalog by using dtutil and use SQL Server to store the configuration.

Answer:

Explanation:

References:

http://msdn.microsoft.com/en-us/library/ms140246.aspx
http://msdn.microsoft.com/en-us/library/hh231187.aspx


Q6. DRAG DROP

You are creating a sales data warehouse. When a product exists in the product dimension, you update the product name. When a product does not exist, you insert a new record.

In the current implementation, the DimProduct table must be scanned twice, once for the insert and again for the update. As a result, inserts and updates to the DimProduct table take longer than expected.

You need to create a solution that uses a single command to perform an update and an insert.

How should you use a MERGE T-SQL statement to accomplish this goal? (To answer, drag the appropriate answer choice from the list of options to the correct location or locations in the answer area. You may need to drag the split bar between panes or scroll to view content.)

Answer:

Explanation:

References:

http://msdn.microsoft.com/en-us/library/bb510625.aspx
http://msdn.microsoft.com/en-us/library/cc280522.aspx
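
For reference, the single-statement upsert the question asks for follows the pattern below. This is a minimal sketch; the staging table (StagingProduct) and the column names are assumptions for illustration only:

-- Update the name of products that already exist in the dimension and
-- insert the ones that do not, in a single scan of DimProduct.
MERGE dbo.DimProduct AS target
USING dbo.StagingProduct AS source
   ON target.ProductAlternateKey = source.ProductAlternateKey
WHEN MATCHED THEN
    UPDATE SET target.ProductName = source.ProductName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductAlternateKey, ProductName)
    VALUES (source.ProductAlternateKey, source.ProductName);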


Q7. A SQL Server Integration Services (SSIS) package imports daily transactions from several files into a SQL Server table named Transaction. Each file corresponds to a different store and is imported in parallel with the other files. The data flow tasks use OLE DB destinations in fast load data access mode.

The number of daily transactions per store can be very large and is growing. The Transaction table does not have any indexes.

You need to minimize the package execution time. What should you do?

A. Partition the table by day and store.

B. Create a clustered index on the Transaction table.

C. Run the package in Performance mode.

D. Increase the value of the Rows per Batch property.

Answer: D

Explanation: * Data Access Mode – This setting provides the 'fast load' option, which internally uses a BULK INSERT statement to load data into the destination table instead of a simple INSERT statement for each row, as is the case for the other options.

* BULK INSERT parameters include ROWS_PER_BATCH = rows_per_batch, which indicates the approximate number of rows of data in the data file.

By default, all the data in the data file is sent to the server as a single transaction, and the number of rows in the batch is unknown to the query optimizer. If you specify ROWS_PER_BATCH (with a value > 0), the server uses this value to optimize the bulk-import operation. The value specified for ROWS_PER_BATCH should be approximately the same as the actual number of rows.
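
The same hint can be seen in plain T-SQL. The sketch below is illustrative only; the file path, format options, and row estimate are assumptions:

-- Bulk import with a batch-size hint for the optimizer.
BULK INSERT dbo.[Transaction]
FROM 'C:\Imports\store001_transactions.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,
    ROWS_PER_BATCH = 500000  -- approximate number of rows in the file
);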


Q8. You develop a SQL Server Integration Services (SSIS) package that imports SQL Azure data into a data warehouse every night.

The SQL Azure data contains many misspellings and variations of abbreviations. To import the data, a developer used the Fuzzy Lookup transformation to choose the closest-matching string from a reference table of allowed values. The number of rows in the reference table is very large.

If no acceptable match is found, the Fuzzy Lookup transformation passes a null value. The current setting for the Fuzzy Lookup similarity threshold is 0.50.

Many values are incorrectly matched.

You need to ensure that more accurate matches are made by the Fuzzy Lookup transformation without degrading performance.

What should you do?

A. Decrease the maximum number of matches per lookup.

B. Change the similarity threshold to 0.55.

C. Change the Exhaustive property to True.

D. Increase the maximum number of matches per lookup.

Answer: B

Explanation: 

Reference: http://msdn.microsoft.com/en-us/library/ms137786.aspx


Q9. DRAG DROP

You develop a SQL Server Integration Services (SSIS) project by using the Project Deployment model.

The project contains many packages. It is deployed on a server named Development1. The project will be deployed to several servers that run SQL Server 2016.

The project accepts one required parameter. The data type of the parameter is a string.

A SQL Agent job is created that will call the master.dtsx package in the project. A job step is created for the SSIS package.

The job must pass the value of an SSIS Environment Variable to the project parameter. The value of the Environment Variable must be configured differently on each server that runs SQL Server. The value of the Environment Variable must provide the server name to the project parameter.

You need to configure SSIS on the Development1 server to pass the Environment Variable to the package.

Which four actions should you perform in sequence by using SQL Server Management Studio? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)

Answer:

Explanation:

References:

http://msdn.microsoft.com/en-us/library/hh479588.aspx
http://msdn.microsoft.com/en-us/library/hh213230.aspx
http://msdn.microsoft.com/en-us/library/hh213214.aspx
http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx
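
The wizard steps performed in SQL Server Management Studio can also be scripted against the SSISDB catalog. The sketch below is illustrative only; the folder, project, environment, variable, and parameter names are assumptions, and the exact argument lists should be confirmed against the catalog stored procedure documentation:

-- Folder 'ETL', project 'MyProject', environment 'Dev1Env', variable
-- 'ServerName', and parameter 'ServerNameParam' are assumed names.
DECLARE @reference_id bigint;

-- 1. Create an environment and a server-specific variable in it.
EXEC SSISDB.catalog.create_environment
     @folder_name = N'ETL', @environment_name = N'Dev1Env';

EXEC SSISDB.catalog.create_environment_variable
     @folder_name = N'ETL', @environment_name = N'Dev1Env',
     @variable_name = N'ServerName', @data_type = N'String',
     @sensitive = 0, @value = N'Development1';

-- 2. Reference the environment from the project.
EXEC SSISDB.catalog.create_environment_reference
     @folder_name = N'ETL', @project_name = N'MyProject',
     @environment_name = N'Dev1Env', @reference_type = 'R',
     @reference_id = @reference_id OUTPUT;

-- 3. Map the project parameter to the environment variable ('R' = referenced).
EXEC SSISDB.catalog.set_object_parameter_value
     @object_type = 20, @folder_name = N'ETL', @project_name = N'MyProject',
     @parameter_name = N'ServerNameParam', @parameter_value = N'ServerName',
     @value_type = 'R';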


Q10. DRAG DROP

You are developing a SQL Server Integration Services (SSIS) package that imports data into a data warehouse. You are developing the part of the SSIS package that populates the ProjectDates dimension table.

The business key of the ProjectDates table is the ProjectName column. The business user has given you the dimensional attribute behavior for each of the four columns in the ProjectDates table:

• ExpectedStartDate - New values should be tracked over time.

• ActualStartDate - New values should not be accepted.

• ExpectedEndDate - New values should replace existing values.

• ActualEndDate - New values should be tracked over time.

You use the SSIS Slowly Changing Dimension Transformation.

You must configure the Change Type value for each source column.

Which settings should you select? (To answer, select the appropriate setting or settings in the answer area. Each Change Type may be used once, more than once, or not at all.)

Answer:

Explanation: In the Slowly Changing Dimension Wizard, a column whose new values should be tracked over time is a Historical attribute, a column whose new values should replace existing values is a Changing attribute, and a column whose new values should not be accepted is a Fixed attribute. ExpectedStartDate and ActualEndDate are therefore Historical attributes, ExpectedEndDate is a Changing attribute, and ActualStartDate is a Fixed attribute.

References:

http://msdn.microsoft.com/en-us/library/ms141715.aspx
http://msdn.microsoft.com/en-us/library/ms141662.aspx


Q11. DRAG DROP

You are loading a dataset into SQL Server. The dataset contains numerous duplicates for the Artist and Song columns.

The values in the Artist column in the dataset must exactly match the values in the Artist domain in the knowledge base. The values in the Song column in the dataset can be a close match with the values in the Song domain.

You need to use SQL Server Data Quality Services (DQS) to define a matching policy rule to identify duplicates.

How should you configure the Rule Editor? (To answer, drag the appropriate answers to the answer area.)

Answer:


Q12. You are using SQL Server Data Tools to develop a SQL Server Integration Services (SSIS) project.

The first package that you create in this project contains a package connection that accesses a flat file. Additional packages in the project must also access this file.

You need to define and reuse the flat file connection in all project packages. What should you do?

A. Convert the package Connection Manager in the first package to a project Connection Manager.

B. Copy the package Connection Manager and paste it into the second package.

C. Convert the project to the Package Deployment model.

D. Set the ProtectionLevel property of the package Connection Manager to DontSaveSensitive to reuse the flat file connection.

Answer: A


Q13. HOTSPOT

You are the Master Data Services (MDS) administrator at your company.

An existing user must be denied access to a certain hierarchy node for an existing model. You need to configure the user's permissions.

Which user management menu item should you select? (To answer, configure the appropriate option or options in the dialog box in the answer area.)

Answer:


Q14. You are creating a SQL Server Integration Services (SSIS) package to retrieve product data from two different sources. One source is hosted in a SQL Azure database. Each source contains products for different distributors.

Products for each distributor source must be combined for insertion into a single product table destination.

You need to select the appropriate data flow transformation to meet this requirement.

Which transformation types should you use? (Each answer represents a complete solution. Choose all that apply.)

A. Slowly Changing Dimension

B. Pivot

C. Lookup

D. Union All

E. Merge

Answer: D,E


Q15. You are a database developer of a Microsoft SQL Server 2016 database. You are designing a table that will store Customer data from different sources. The table will include a column that contains the CustomerID from the source system and a column that contains the SourceID. A sample of this data is as shown in the following table. You need to ensure that the table has no duplicate CustomerID within a SourceID. You also need to ensure that the data in the table is in the order of SourceID and then CustomerID.

Which Transact-SQL statement should you use?

A. CREATE TABLE Customer
(SourceID int NOT NULL IDENTITY,
CustomerID int NOT NULL IDENTITY,
CustomerName varchar(255) NOT NULL);

B. CREATE TABLE Customer
(SourceID int NOT NULL,
CustomerID int NOT NULL PRIMARY KEY CLUSTERED,
CustomerName varchar(255) NOT NULL);

C. CREATE TABLE Customer
(SourceID int NOT NULL PRIMARY KEY CLUSTERED,
CustomerID int NOT NULL UNIQUE,
CustomerName varchar(255) NOT NULL);

D. CREATE TABLE Customer
(SourceID int NOT NULL,
CustomerID int NOT NULL,
CustomerName varchar(255) NOT NULL,
CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (SourceID, CustomerID));

Answer: D
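
As a quick illustration of why option D meets both requirements: the composite clustered primary key orders rows by SourceID and then CustomerID and rejects a duplicate CustomerID within the same SourceID. The sample values below are assumptions:

-- Different SourceID values may reuse the same CustomerID: allowed.
INSERT INTO Customer (SourceID, CustomerID, CustomerName)
VALUES (1, 100, 'Contoso'),
       (2, 100, 'Fabrikam');

-- Same (SourceID, CustomerID) pair: fails with a primary key violation.
INSERT INTO Customer (SourceID, CustomerID, CustomerName)
VALUES (1, 100, 'Duplicate');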


Q16. You are designing a SQL Server Integration Services (SSIS) package that uses the Fuzzy Lookup transformation.

The reference data to be used in the transformation does not change.

You need to reuse the Fuzzy Lookup match index to increase performance and reduce maintenance.

What should you do?

A. Select the GenerateAndPersistNewIndex option in the Fuzzy Lookup Transformation Editor.

B. Select the GenerateNewIndex option in the Fuzzy Lookup Transformation Editor.

C. Select the DropExistingMatchIndex option in the Fuzzy Lookup Transformation Editor.

D. Execute the sp_FuzzyLookupTableMaintenanceUninstall stored procedure.

E. Execute the sp_FuzzyLookupTableMaintenanceInvoke stored procedure.

Answer: A

Explanation: 

Reference: http://msdn.microsoft.com/en-us/library/ms137786.aspx


Q17. DRAG DROP

You are validating whether a SQL Server Integration Services (SSIS) package named Master.dtsx in the SSIS catalog is executing correctly.

You need to display the number of rows in each buffer passed between each data flow component of the package.

Which three actions should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)

Answer:

Explanation:

1. Run the package with the Verbose logging level.

2. Retrieve the execution ID of the run from the catalog.executions view.

3. Retrieve the row counts (rows_sent) from the catalog.execution_data_statistics view for that execution ID.

Ref: http://msdn.microsoft.com/en-us/library/hh230986.aspx
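
Once Master.dtsx has been executed with the Verbose logging level, the row counts per data flow path can be read back with a query along these lines. This is a minimal sketch using the standard SSISDB catalog views; only the package name comes from the question:

USE SSISDB;

DECLARE @execution_id bigint;

-- Most recent execution of the package.
SELECT TOP (1) @execution_id = execution_id
FROM catalog.executions
WHERE package_name = N'Master.dtsx'
ORDER BY execution_id DESC;

-- Rows sent down each data flow path in that execution
-- (populated only when the Verbose logging level is used).
SELECT package_name,
       task_name,
       source_component_name,
       destination_component_name,
       rows_sent
FROM catalog.execution_data_statistics
WHERE execution_id = @execution_id
ORDER BY created_time;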