SQL Server Analysis Services (SSAS) is the technology from the Microsoft Business Intelligence stack used to develop Online Analytical Processing (OLAP) solutions. In simple terms, you can use SSAS to create cubes using data from data marts/data warehouses for deeper and faster data analysis.
Professionals with SSAS skills are in huge demand these days. If you are someone who is likely to attend an interview in which you will be asked questions based on SSAS, please go through the list of questions in this blog. These are the most commonly asked SSAS interview questions and answers for experienced candidates and freshers, and they have been doing the rounds in recent interview sessions. We hope these questions help you in a significant way.
For anyone expecting to attend an SSAS-based interview in the near future, here are the most common questions and their answers to guide you in the right way for your upcoming interview. After taking insights from various students who have appeared in SSAS interviews recently, we have organized a list of the SSAS interview questions most commonly asked by hiring managers, along with their answers, to prepare both freshers and experienced professionals for their interview sessions.
SQL Server Analysis Services (SSAS) is the Online Analytical Processing (OLAP) component of SQL Server. SSAS enables you to build multidimensional structures called cubes to pre-compute and store complex aggregations, and also to build mining models to perform data analysis and identify valuable information such as trends, patterns, and relationships within the data using the Data Mining capabilities of SSAS, insights which would otherwise be extremely hard to determine.
OLAP is the abbreviation for Online Analytical Processing. It is a capability, or a set of tools, that enables end users to easily and effectively access the data warehouse's data using a wide variety of tools such as Microsoft Excel, Reporting Services, and many other third-party business intelligence tools.
OLAP is used for analysis purposes to support day-to-day business decisions and is characterized by less frequent data updates and historical data. In contrast, OLTP (Online Transaction Processing) is used to support day-to-day business operations and is characterized by frequent data updates; it contains the latest data along with limited historical data, depending on the retention policy driven by business needs.
A Data Source contains the connection information used by SSAS to connect to the underlying database and load the data into SSAS during processing. Apart from other properties such as query timeout and isolation level, a Data Source essentially contains the provider, the server and database names, and the credentials (impersonation information) used for the connection.
SSAS supports both .NET and OLE DB providers. The following are some of the significant sources supported by SSAS: SQL Server, MS Access, Oracle, Teradata, IBM DB2, and other relational databases with the appropriate OLE DB provider.
Impersonation is the mechanism that enables SSAS to assume the identity/security context of the client application, which SSAS uses to perform server-side data operations such as data access and processing.
A Data Source View (DSV) is a logical view of the underlying database schema and offers a layer of abstraction over that schema. This layer acts as a source for SSAS and captures the schema-related information from the underlying database.
The schematic information present in a DSV includes table and view metadata, primary keys, relationships between tables, named calculations, and named queries.
A Named Calculation is a new column added to a table in the DSV, defined by an expression. This capability lets you add an extra column to your DSV based on one or more columns from the underlying data source tables/views, combined using an expression, without requiring the addition of a physical column in the underlying database tables/views. For example, a FullName column can be derived from the FirstName and LastName columns.
Many of the UIs/designers/wizards in BIDS that are part of an SSAS project depend on the primary keys and the relationships between fact and dimension tables. Hence it is essential to define primary keys and relationships in the DSV.
If you are applying for senior-level positions, here are the popular SSAS interview questions and answers often asked of experienced professionals!
A data mart is a subset of an organizational data store, usually oriented toward a specific purpose or major data subject, that may be distributed to support business needs. Data marts are analytical data stores designed to focus on specific business functions for a specific community within an organization. Data marts are often derived from subsets of data in a data warehouse, though in the bottom-up data warehouse design methodology the data warehouse is created from the union of organizational data marts.
A dimension table contains the hierarchical data by which you would like to summarize. A dimension table contains specific business information; it holds the specific name of each member of the dimension. The name of a dimension member is called an "attribute". The key attribute of the dimension must contain a unique value for each member of the dimension, and this key attribute is called the "primary key column". The primary key column of each dimension table corresponds to one of the key columns in any related fact table.
A fact table contains the basic information that you wish to summarize. The table that stores the detailed values for measures is called the fact table.
The "Factless Fact Table" is a table that is like a Fact Table with the exception of having any measure; implying that this table simply has the connections to the measurements. These tables empower you to follow occasions; undoubtedly they are for account occasions. Factless actuality tables are utilized for following a procedure or gathering details. They are called so on the grounds that the reality table does not have aggregatable numeric qualities or data. They are unimportant key qualities with reference to the measurements from which the details can be gathered.
The snowflake schema is an extension of the star schema, where each point of the star explodes into more points. In a star schema, each dimension is represented by a single dimension table, while in a snowflake schema that dimension table is normalized into multiple lookup tables, each representing a level in the dimensional hierarchy. In a snowflake schema, the fact table is linked directly to some dimension tables, and there are some intermediate dimension tables between the fact and other dimension tables.
In Analysis Services, we generally see that every attribute hierarchy in a dimension has an All member. This is because of the IsAggregatable property of the attribute. You can set its value to false so that the hierarchy does not show the All member, which is the default member for that attribute. If you hide this member, you should explicitly set another value as the attribute's default member; otherwise SSAS will pick some value as the default, and this will create confusion when reading the data if someone is not aware of the change in the default member.
These measure groups can contain different dimensions and be at different granularity, but as long as you model your cube correctly, your users will be able to use measures from each of these measure groups in their queries easily and without worrying about the underlying complexity.
A surrogate key is a system-generated key that acts as a substitute primary key for a table in the database. Data warehouses commonly use a surrogate key to uniquely identify an entity. A surrogate key is not generated by the user but by the system. A primary difference between a primary key and a surrogate key in some databases is that a primary key uniquely identifies a record, while a surrogate key uniquely identifies an entity.
In Analysis Services, a KPI is a collection of calculations that are associated with a measure group in a cube and are used to evaluate business success. We use KPIs to view the state of the business at a particular point and to represent it with graphical items such as traffic lights, gauges, and so on.
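As a hedged sketch, the parts of a KPI are simply MDX expressions entered in the SSAS KPI designer; all measure, KPI, and hierarchy names below are illustrative assumptions, not from any real cube:

```python
# Illustrative MDX fragments for a KPI, as they might be entered in the
# SSAS KPI designer. All object names are placeholders/assumptions.
kpi_expressions = {
    # Value: the measure the KPI evaluates.
    "value": "[Measures].[Sales Amount]",
    # Goal: 10% growth over the same period last year.
    "goal": ("1.10 * ([Measures].[Sales Amount], "
             "ParallelPeriod([Date].[Calendar].[Calendar Year], 1))"),
    # Status: 1 (good) or -1 (bad), which drives the traffic light/gauge.
    "status": ("Case When KpiValue('Sales Growth') >= KpiGoal('Sales Growth') "
               "Then 1 Else -1 End"),
}
```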
A perspective is a way to reduce the complexity of cubes by hiding elements such as measure groups, measures, dimensions, hierarchies, and so on. It is nothing but a slice of a cube; for example, if we have retail and hospital data in the same cube and an end user only needs to see hospital data, we can create a perspective accordingly.
In SSAS, impersonation helps to assume the identity/security context of the particular client application to perform server-side data operations. The data-side operations include data access, processing, etc. There are multiple impersonation options available in SSAS: use a specific Windows user name and password, use the service account, use the credentials of the current user, or inherit/default.
A Data Source in SSAS includes the connectivity information used to connect to the underlying database and load the data for processing. It consists of information such as the provider, the server and database names, and the credentials (impersonation information) used for the connection.
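For illustration, the connectivity information in a Data Source essentially amounts to an OLE DB connection string plus impersonation settings; the server and database names below are placeholders:

```python
# Illustrative OLE DB connection string of the kind an SSAS data source
# stores. Server and database names are placeholders.
connection_string = (
    "Provider=SQLNCLI11;"                # SQL Server Native Client OLE DB provider
    "Data Source=MyDbServer;"            # underlying relational server (placeholder)
    "Initial Catalog=AdventureWorksDW;"  # source database (placeholder)
    "Integrated Security=SSPI;"          # Windows authentication
)
print(connection_string)
```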
SSAS provides strong support for OLE DB and .NET providers. Apart from SQL Server, the significant sources supported by SSAS are Teradata, MS Access, Oracle, IBM DB2, and many other relational databases.
To perform OLAP and Data mining functionalities, SSAS makes use of client and server components.
The server component is implemented as a Microsoft Windows service; every SSAS instance runs as a separate instance of that Windows service.
Clients interact with SSAS using protocols that allow them to access standard multidimensional data; one such protocol is XML for Analysis (XMLA).
UDM stands for the Unified Dimensional Model in SSAS. It is a bridge between the data sources and the users, and it helps collect data from all the different available sources into one single model. The UDM consists of different components such as the Data Source, the Data Source View, and the dimensional model.
SSAS uses a specialized engine called the OLAP engine that allows users to explore data interactively. The OLAP engine enables fast ad-hoc queries by end users. Data can be explored by drilling, slicing, and pivoting.
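For instance, slicing restricts a query to a single member of a dimension. The hedged MDX illustration below uses object names from the Adventure Works sample cube, which is an assumption:

```python
# Illustrative MDX showing "slicing": the WHERE clause fixes one member of
# the Date dimension. Names follow the Adventure Works sample (assumption).
slice_mdx = """
SELECT [Measures].[Sales Amount] ON COLUMNS,
       [Product].[Category].Members ON ROWS
FROM [Adventure Works]
WHERE ([Date].[Calendar Year].&[2013])
"""
```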
FASMI stands for Fast Analysis of Shared Multidimensional Information and is an alternative term for OLAP. A database is an OLAP database if it follows and satisfies the FASMI rules.
Dimensions in SSAS comprise a group of attributes, represented as columns in tables or views. Dimensions can be used multiple times in a cube, or across many cubes, and can be interlinked. The significant types of dimensions are the database dimension and the cube dimension.
The difference between a calculated measure and a derived measure lies in the point in time at which the calculation is performed.
Calculated measure: the calculation is performed at query time, after aggregations are created, and the resulting values are not stored in the cube. Derived measure: the calculation is performed before aggregations are created, while the data is being processed, and the resulting values are stored in the cube.
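As a sketch, a calculated measure is typically defined as an MDX calculated member in the cube's calculation script; the cube and measure names below are assumptions:

```python
# Illustrative MDX script defining a calculated measure. It is evaluated at
# query time and its values are not stored in the cube. Names are placeholders.
calculated_measure_mdx = """
CREATE MEMBER CURRENTCUBE.[Measures].[Profit]
    AS [Measures].[Sales Amount] - [Measures].[Total Product Cost],
    FORMAT_STRING = 'Currency',
    VISIBLE = 1;
"""
```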
A partition in SSAS refers to the physical location of the stored cube data. By default there is one partition per measure group; every time a measure group is created, another partition is created with it.
Query performance improves when partitions are used because SSAS reads data only from the partitions that contain the answers to the queries. Partitions help in the management of cubes and also store aggregations.
An attribute hierarchy is a hierarchy created by SSAS for every attribute in a dimension. An attribute hierarchy consists of two levels: the All level and the detail level. Hierarchies are used to organize attributes, which can be combined into user-defined hierarchies to provide drill-down paths through the cube. The AttributeHierarchyDisplayFolder property identifies the folder in which the associated attribute hierarchy is displayed to end users.
Below are the steps typically followed to create a cube: create an Analysis Services project in BIDS, define a data source, define a data source view, create the dimensions, run the Cube Wizard to build the cube, and finally deploy and process the cube.
SSAS 2008 introduced several improvements over the limitations of SSAS 2005. The significant differences between SSAS 2005 and SSAS 2008 are:
A tuple is a slice of data from a cube. It is a combination of one or more members from different dimensions. We can extract the first tuple from a set using the function Set.Item(0), as the query sketched below shows.
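A hedged illustration follows; the cube, hierarchy, and measure names are taken from the Adventure Works sample and are assumptions. The calculated member converts the first tuple of the set to a string so it can be inspected:

```python
# Illustrative MDX: Item(0) returns the first tuple of a set. Cube and
# hierarchy names follow the Adventure Works sample (assumption).
first_tuple_mdx = """
WITH
  SET [Top5Products] AS
      TopCount([Product].[Product].[Product].Members, 5,
               [Measures].[Sales Amount])
  MEMBER [Measures].[First Tuple] AS
      TupleToStr([Top5Products].Item(0))
SELECT {[Measures].[First Tuple]} ON COLUMNS
FROM [Adventure Works]
"""
```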
Named queries are SQL expressions or queries in the data source view that act as a table. The main purpose of a named query is to combine data from one or more tables. Named queries do not require any schema changes to the source data; they are used to shape or conform the data that the data source view exposes.
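A minimal sketch of a named query; the table and column names follow the AdventureWorksDW sample database and are assumptions:

```python
# Illustrative SQL for a DSV named query combining two source tables.
# Table/column names follow the AdventureWorksDW sample (assumption).
named_query_sql = """
SELECT p.ProductKey,
       p.EnglishProductName,
       s.EnglishProductSubcategoryName
FROM dbo.DimProduct AS p
JOIN dbo.DimProductSubcategory AS s
  ON p.ProductSubcategoryKey = s.ProductSubcategoryKey
"""
```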
A data warehouse is an environment that represents the organization's data. It gives a complete view of the enterprise, both current and historical information, for decision making.
A data mart is defined as a subset of the organization's data. It specifically includes the analytical data of a particular subject or department in an organization. Data marts are of three different types: dependent, independent, and hybrid.
Difference between a data warehouse and a data mart: the complete data of an organization or enterprise is called a data warehouse, while a data mart is a subject-oriented subset of that complete data.
A property called "AttributeHierarchyVisible" has to be set to False in the properties of the attribute.
It is represented as AttributeHierarchyVisible = False.
Yes, it is possible to keep an attribute hierarchy from being processed by using the hierarchy property AttributeHierarchyEnabled.
Set the property "AttributeHierarchyEnabled" = False.
Security is one of the most essential factors to consider when dealing with confidential data. Security is applied to cubes by defining roles. The procedure used to secure a cube is, broadly: create a role in the SSAS database, add Windows users or groups as members of the role, and grant the role the appropriate permissions on the cube (optionally restricting dimension data as well).
Rigid and flexible relationships are the two types of attribute relationships; they describe how the relationship between attributes behaves over time. In a rigid relationship, the relationship between members does not change over time, whereas in a flexible relationship it can change.
The fact table is defined as the table that includes the summary or basic business information. It is used to perform business analysis using metrics. In technical terms, the fact table is the table that stores the detailed values for measures.
Dimension table: the dimension table includes the hierarchical data of an organization or enterprise that is specific to the business. It consists of the specific name of each member of a dimension. The dimension table uses a key attribute, called the primary key column, that contains a unique value for each dimension member.
There are three types of dimensions: conformed dimensions, junk dimensions, and degenerate dimensions.
Data-driven testing is a methodology in which a series of test scripts containing test cases is executed repeatedly using data sources such as Excel spreadsheets, XML files, CSV files, and SQL databases for the input values, and the actual output is compared with the expected output in the verification process.
For example, Test Studio can be used for data-driven testing.
Some advantages of data-driven testing are:
Automation testing is a testing methodology in which an automation tool is used to execute the test case suite in order to increase test coverage as well as test execution speed. Automation testing does not require human intervention, as it executes pre-scripted tests and is capable of reporting and comparing outcomes with previous test runs.
Repeatability, ease of use, accuracy, and greater consistency are some of the advantages of Automation testing.
Some automation testing tools are listed below:
Ultimately, functional testing is aimed at ensuring that software works as specified and meets user expectations. Functional testing may seem simple on its face, but it involves a variety of methods, some of which may be preferred or prioritized over others based on the application and organization. These methods are outlined below:
When performing functional testing, you should follow the steps outlined below:
The two major types of software testing are functional testing and non-functional testing.
Testing software or applications aims to build a quality product. Functional testing and unit testing are the backbone of software testing.
The term ad hoc testing, also known as random testing, generally refers to a type of testing that occurs without proper planning or documentation.
Ad hoc testing is usually performed randomly, without documentation or test design, and it is usually unplanned. Ad hoc testing does not adhere to any particular structure and is done randomly on any part of the application to identify defects/bugs. When time is limited and exhaustive testing cannot be performed, ad hoc testing may be conducted. The tester needs to have a thorough understanding of the system under test in order to conduct effective ad hoc testing.
Example: ad hoc testing is cost-effective and can save a lot of time. Suppose the client needs the product by 4 PM today, but development will only be finished by 2 PM. With only 2 hours to work with, the developer and tester team can test the system as a whole by taking some random inputs and checking for bugs.
Monkey testing and ad hoc testing are two different types of software testing that can be run on the software; both are conducted to ensure that the system is bug-free.
Functional testing involves two distinct test techniques, which can be defined as follows:
Risk-based testing refers to the process of prioritizing tests according to risk, which is used as a basis for developing a Test Strategy. An organization can use risk-based testing (RBT) to prioritize testing software features and functions according to the probability of failure, the importance of the feature, and the impact of a failure. Testing is then performed, starting with the highest risk. Testers who use a risk-based approach are more likely to be aware of risk factors that can lead to project failures.
Following are the main factors to be considered in risk-based testing:
Equivalence Partitioning is also called Equivalence Class Partitioning (ECP) and is a form of black-box testing. In this method, input domain data is divided into equivalence classes (partitions) and test cases are derived using these classes of data. Then, while testing, one sample value is picked from each class. By using this method, test cases are generally reduced to a finite set of testable cases that still cover the maximum requirements.
The equivalence partitioning technique is applied only when input data values can be divided into ranges. For each range partition, only one condition is tested, assuming that all other conditions within the same partition will behave similarly.
Example: let's say you have an input field that can accept only percentage values between 50 and 90. In that case, it would be pointless to write a separate test case for every valid input from 50 to 90, plus many others for invalid data.
The equivalence partitioning method outlined above can be used to divide the test cases into classes of input data, with each test case representing one class. As the example above shows, we can split our test cases into three equivalence classes (there can be more) composed of valid and invalid inputs.
Test cases for an input box accepting percentages between 50 and 90, using equivalence partitioning, are sketched below:
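A minimal sketch, assuming a hypothetical validator function standing in for the application logic; one representative value per equivalence class is enough:

```python
# Minimal sketch of equivalence partitioning for a 50-90 percentage field.
# `is_valid_percentage` is a hypothetical validator standing in for the
# real application logic under test.

def is_valid_percentage(value: int) -> bool:
    return 50 <= value <= 90

# One representative value per equivalence class:
# below the range (invalid), inside the range (valid), above it (invalid).
test_cases = [(35, False), (70, True), (95, False)]

for value, expected in test_cases:
    assert is_valid_percentage(value) == expected, f"failed for {value}"
print("All equivalence-partitioning checks passed.")
```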
Boundary value analysis is a technique for testing the boundary values of equivalence class partitions. Boundary value analysis identifies errors at the boundaries, as opposed to within the ranges as in equivalence partitioning.
Example: consider an input field in an application that accepts a minimum of 5 characters and a maximum of 10 characters. Lengths from 5 to 10 are valid, while lengths of 4 or fewer and 11 or more are invalid, so boundary value analysis tests the values at and around the boundaries: 4, 5, 6 and 9, 10, 11.
Test cases for an application input field accepting between 5 and 10 characters, using boundary value analysis, are sketched below:
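A minimal sketch under the same assumption of a hypothetical length validator:

```python
# Minimal sketch of boundary value analysis for a field accepting 5-10
# characters. `is_valid_length` is a hypothetical validator for the real check.

def is_valid_length(text: str) -> bool:
    return 5 <= len(text) <= 10

# Test lengths at and around both boundaries: 4, 5, 6 and 9, 10, 11.
boundary_cases = [(4, False), (5, True), (6, True), (9, True), (10, True), (11, False)]

for length, expected in boundary_cases:
    value = "x" * length
    assert is_valid_length(value) == expected, f"failed for length {length}"
print("All boundary-value checks passed.")
```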
UFT (Unified Functional Testing), also known as QTP (QuickTest Professional), is an automated functional testing tool that helps testers to conduct automated tests to identify errors, defects, and other deviations from the expected behaviour of a software application.
A data-driven testing approach is a method of functional testing where a series of test scripts is executed repeatedly with the use of data sources such as Excel spreadsheets, CSV files, XML files, and SQL databases. These data sources are used as inputs for generating output; the output is then compared with what was expected in order to verify the system or software.
A data-driven approach is preferable because testers often have multiple data sets for a single test, and it can be time-consuming to create individual tests for each data set. By using data-driven testing, data and test scripts can be separated, and the same test script can be run for different combinations of input data, resulting in efficient testing results.
Example: let's assume we want to test the login system with 100 different data sets and multiple input fields. We can take one of the three approaches below: create 100 separate test scripts, one per data set; use a single script and manually change the input values before each run; or keep the data in a data file and run a single data-driven test script that iterates over it.
The first two approaches are arduous and time-consuming. As a result, it is best to utilize the third approach (data-driven testing), as the sketch below illustrates.
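A minimal sketch using pytest parametrization; the login function is a hypothetical stand-in for the system under test, and in practice the data rows would be loaded from a CSV/Excel/XML/SQL source rather than hard-coded:

```python
# Minimal data-driven sketch with pytest: one test script, many data rows.
import pytest

def login(username: str, password: str) -> bool:
    # Hypothetical stand-in for the real system under test.
    return username == "admin" and password == "secret"

# (username, password, should_succeed) - a small stand-in for 100 data sets;
# in a real suite these rows would come from an external data source.
LOGIN_DATA = [
    ("admin", "secret", True),
    ("admin", "wrong", False),
    ("guest", "secret", False),
]

@pytest.mark.parametrize("username,password,should_succeed", LOGIN_DATA)
def test_login(username, password, should_succeed):
    assert login(username, password) == should_succeed
```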
The Requirement Traceability Matrix, or RTM, is a tool that keeps track of requirements as a system or application progresses through the testing process. As soon as the requirements document is received, the RTM is created, and it is maintained until the system or application is released. The RTM is used to ensure that all requirements in the requirements specification have been implemented before the release of the system.
Each tester should be responsible for understanding the client's requirements and ensuring that the output product is error-free. To accomplish this goal, the QA team must create test cases after thoroughly analyzing the requirements. As a result, the client's software requirements need to be divided further into different scenarios and finally into test cases. Each of these cases needs to be tested separately.
The simplest way is to trace the requirements to their corresponding test cases and scenarios; this mapping is termed the 'Requirement Traceability Matrix.' Typically, the traceability matrix is a worksheet that contains the requirements, their associated test cases and scenarios, and the current status of those tests, i.e., whether they were successfully executed or not. This helps the testing team understand the extent of the testing done for the specific product.
Stress testing is a form of performance testing in which the application is subjected to exertion or stress, i.e., execution of the application above its breaking threshold, to determine the point at which the application crashes. This condition usually arises when there are too many users and too much data. Stress testing also verifies that the application recovers when the workload is reduced.
Load testing is a form of performance testing in which the application is executed at various load levels to monitor the peak performance of the server, response time, server throughput, etc. Through the load testing process, the stability, performance, and integrity of the application under concurrent system load are determined.
Volume testing is a form of performance testing that determines the server throughput and response time when concurrent users, as well as a large data load from the database, are put onto the system/application under test.
There are two different test techniques that are used in functional testing.
They can be defined as below:
Exploratory testing means testing or exploring the application without following any schedules or procedures. While performing exploratory testing, testers do not follow any pattern; they use out-of-the-box thinking and diverse ideas to see how the application performs.
Following this process covers even the smallest parts of the application and helps in finding more issues/bugs than the normal scripted test case process.
Exploratory testing is usually performed in cases when:
Enlisted below are the possible scenarios that can be performed to fully test the login feature of any application:
There are a few other possible scenarios as well that can be tested.
Accessibility testing is a form of usability testing performed to ensure that the application can be easily handled by people with disabilities such as hearing impairments, color blindness, low vision, etc. In today's world, the web has acquired a major place in our lives in the form of e-commerce sites, e-learning, e-payments, etc.
Thus, everyone should be able to be a part of technology, especially people with disabilities.
Enlisted below are a few types of software which help and assist people with disabilities to use technology:
Adhoc testing, usually known as random testing, is a form of testing that does not follow any test case or requirement of the application. Adhoc testing is basically an unplanned activity in which any part of the application is randomly checked to find defects.
In such cases, the defects encountered are very difficult to reproduce, as no planned test cases are followed. Adhoc testing is usually performed when there is limited time to perform elaborate testing.
Equivalence partitioning, also known as equivalence class partitioning, is a form of black-box testing where input data is divided into data classes. This process is done in order to reduce the number of test cases while still covering the maximum requirements.
The equivalence partitioning technique is applied where input data values can be divided into ranges. The ranges of input values are defined in such a way that only one condition from each range partition needs to be tested, assuming that all the other conditions of the same partition will behave the same for the software.
For example: to identify the rate of interest according to the balance in an account, we can identify the ranges of balance amounts that earn different rates of interest, as sketched below.
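A minimal sketch of this example; the balance ranges and rates are illustrative assumptions, and one representative balance per partition is tested:

```python
# Sketch: interest rate by balance range (ranges and rates are illustrative).
def interest_rate(balance: float) -> float:
    if balance < 10_000:
        return 0.01   # 1% for balances below 10,000
    elif balance < 50_000:
        return 0.02   # 2% for balances from 10,000 to 49,999
    return 0.03       # 3% for balances of 50,000 and above

# Equivalence partitioning: one representative balance per range partition.
for balance, expected in [(5_000, 0.01), (25_000, 0.02), (75_000, 0.03)]:
    assert interest_rate(balance) == expected, f"failed for {balance}"
print("All range-partition checks passed.")
```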
The boundary value analysis method checks the boundary values of equivalence class partitions. Boundary value analysis is basically a testing technique that identifies errors at the boundaries rather than within the range values.
Defect severity is defined by the level or degree of impact of the defect on the application under test. The higher the severity of the defect, the greater the impact on the application.
Following are the 4 classes in which a defect severity is categorized:
Defect priority defines the order in which defects should be resolved; a higher priority implies that the application is unusable or blocked at some point and that the defect should be resolved as soon as possible.
Following are the 3 classes in which a defect priority is defined:
Smoke testing is performed on the application after receiving the build. The tester usually tests the critical path, not the functionality in depth, to determine whether the build should be accepted for further testing or rejected in case the application is broken.
A smoke checklist usually contains the critical path of the application without which an application is blocked.
Sanity testing is performed after receiving a build to check the new functionality and the defects to be fixed. In this form of testing, the goal is to check that the functionality roughly works as expected, to determine whether the bug is fixed, and to check the effect of the fixed bug on the application under test.
There is no point in the tester accepting the build and wasting time if sanity testing fails.
Requirement Traceability Matrix (RTM) is a tool to keep a track of requirement coverage over the process of testing.
In the RTM, all requirements are categorized by their development over the course of a sprint, and their respective IDs (new feature implementation/enhancement/previous issues, etc.) are maintained to keep track that everything mentioned in the requirement document has been implemented before the release of the product.
RTM is created as soon as the requirement document is received and is maintained until the release of the product.
Risk-based testing of a project is not just about delivering a project risk-free; the main aim of risk-based testing is to achieve the project outcome by carrying out the best practices of risk management.
The major factors to be considered in Risk-based testing are as follows:
We hope you found these SSAS interview questions and answers useful. Overall, SSAS interview questions are designed to test your knowledge of the software and your ability to think critically about data. The questions asked will vary depending on the position you are interviewing for, but they will all focus on assessing your skill set. If you can demonstrate a strong understanding of SSAS and how it can be used to solve real-world problems, then you should have no trouble impressing potential employers and landing the job you want.
After you have gone through these questions, the next step is to enroll in an online SQL training course and begin learning basic database skills right away. If you're not convinced about the advantages of online training, sign up for a free demo class to see our online learning environment and hear about our mentors who can help you advance your career.