Resolution: Error "Could not load file or assembly 'file://\\server\path\file.dll' or one of its dependencies" (0x80131515)

Issue

Call to a .NET Framework assembly on a network folder fails with error:

"Could not load file or assembly 'file://\\server\path\file.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)"

Troubleshooting

In this particular case, I wanted to call an assembly containing tests that had been deployed by Team Build to a shared folder, and then run them with MSTest from the command line.

MSTest behaved differently depending on whether it was called from a Windows Server 2008 R2 Standard machine or a Windows 8 client. Using the following command line:

MSTest /testcontainer:\\server\path\file.dll /detail:debugtrace /detail:traceinfo

  • From a Windows 8 client, running as a local administrator: the command works as expected. It loads the tests and executes them.
  • From Windows Server 2008 R2 Standard, running as a local administrator: the command fails with the following message:

Could not load file or assembly 'file://\\server\path\file.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)

Further troubleshooting using 1) a mapped drive, 2) PowerShell, and 3) the /testcontainer parameter between quotes showed that all of these also failed under Windows Server 2008 R2 Standard.

The error “Could not load file or assembly” (0x80131515) is a catch-all. For instance, it is also reported when the exe or dll was downloaded from an untrusted zone. That case is fixed by right-clicking the assembly and choosing “Unblock” on the General tab of its Properties. Sometimes the “Unblock” option won’t show, and in this case it didn’t.

Blocking is based on a Zone.Identifier alternate data stream that is added to a file when it is copied from the Internet. To validate whether the file is actually blocked, display this Zone.Identifier stream using the following command (note the direction of the “<” sign – redirecting in the opposite direction would overwrite the stream):

more < file.dll:Zone.Identifier

If ZoneId is 3 or 4, your file is blocked. In our case, the files showed no alternate Zone.Identifier stream at all.
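When present, the stream content is a small INI-style blob (typically `[ZoneTransfer]` followed by `ZoneId=<n>`). A minimal sketch of deciding blocked status from that content (plain Python parsing sample stream text, since reading the actual NTFS alternate stream requires Windows; the ZoneId values are the standard Windows security zones):

```python
# Parse the content of a Zone.Identifier alternate data stream and decide
# whether Windows would treat the file as blocked.
# Zones: 0 = My Computer, 1 = Intranet, 2 = Trusted, 3 = Internet, 4 = Restricted.
import configparser

def is_blocked(stream_text: str) -> bool:
    parser = configparser.ConfigParser()
    parser.read_string(stream_text)
    # fallback=-1 covers a missing section/option (i.e. no usable stream)
    zone = parser.getint("ZoneTransfer", "ZoneId", fallback=-1)
    return zone in (3, 4)

# Typical content written by a browser download:
print(is_blocked("[ZoneTransfer]\nZoneId=3\n"))  # True  -> blocked
print(is_blocked("[ZoneTransfer]\nZoneId=1\n"))  # False -> Intranet, not blocked
print(is_blocked(""))                            # False -> no stream content
```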

Since copying the files to a local folder on the Windows Server 2008 R2 Standard machine allowed MSTest to execute the tests, the issue had to be constrained to some security policy difference between Windows 8 and Windows Server 2008 R2 Standard. This also confirmed that the files were not blocked.

As a next step, I looked into the .NET Framework security policy configuration. Using the instructions from this Microsoft blog post, I modified the CAS policy settings with the following command:

CasPol.exe -m -ag 1.2 -url file://\\server\path\file\* FullTrust

This added the following record into the CAS policy (CasPol.exe -l):

   1.2.  Zone - Intranet: LocalIntranet
      1.2.1.  All code: Same site Web
      1.2.2.  All code: Same directory FileIO - 'Read, PathDiscovery'
      1.2.3.  Url - file://\\server\path\file\*: FullTrust

This did not resolve the issue, but the following error started to be reported in the Event Log:

Log Name:      Application
Source:        VSTTExecution
Description:[…]
(MSTest, PID 2000, Thread 1) AssemblyEnumerator.EnumerateAssembly threw exception: System.IO.FileLoadException: Could not load file or assembly 'file://\\server\path\file.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)

File name: 'file://\\server\path\file.dll' ---> System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.

By following the link recommended in the log, I got more information on the issue and how to solve it with the configuration switch loadFromRemoteSources.

From MSDN documentation:

“In the .NET Framework version 3.5 and earlier versions, if you loaded an assembly from a remote location, the assembly would run partially trusted with a grant set that depended on the zone in which it was loaded. For example, if you loaded an assembly from a website, it was loaded into the Internet zone and granted the Internet permission set. In other words, it executed in an Internet sandbox. If you try to run that assembly in the .NET Framework version 4 and later versions, an exception is thrown; you must either explicitly create a sandbox for the assembly (see How to: Run Partially Trusted Code in a Sandbox), or run it in full trust.

The <loadFromRemoteSources> element lets you specify that the assemblies that would have run partially trusted in earlier versions of the .NET Framework are to be run fully trusted in the .NET Framework 4 and later versions. By default, remote assemblies do not run in the .NET Framework 4 and later […]. If you set enabled to true, remote applications are granted full trust.

If <loadFromRemoteSources> enabled is not set to true, an exception is thrown under the following conditions:

  • The sandboxing behavior of the current domain is different from its behavior in the .NET Framework 3.5. This requires CAS policy to be disabled, and the current domain not to be sandboxed.
  • The assembly being loaded is not from the MyComputer zone.”

And, in the same article:

“In the .NET Framework 4.5, assemblies on local network shares are run as full trust by default; you do not have to enable the <loadFromRemoteSources> element.”

This part of the documentation also shows why it worked on Windows 8: .NET Framework 4.5 is the default for Windows 8.

For validation purposes, I originally applied the recommended setting to machine.config, which is too broad a scope. I later tested with just MSTest.exe.config, which also worked and is the narrower, recommended scope.

Resolution

[Verified] Add the following entry to the MSTest.exe.config file (at C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE):

<configuration>
   <runtime>
      <loadFromRemoteSources enabled="true"/>
   </runtime>
</configuration>

or

[According to documentation] Use .NET Framework 4.5.

Gartner: Visual Studio and TFS in the Leaders quadrant again

Brian Harry highlighted in his latest post how Gartner had included Visual Studio and TFS in the Leaders quadrant for the second year in a row! Here is a picture of it:

image

I had never heard of Soasta, Tricentis, Test Plant, or Original Software; I will have to check them out.

This report, along with the other one on ADLM tooling, makes for an interesting analysis: Microsoft has an edge over the ADLM quadrant leaders in that some of the others (Rally, for instance) do not even show up here. I will come back to that later.

Understanding new trends on source control format

Last month I went to a nice presentation at the Austin TFS User Group on migrating from TFVC to TFS Git using Git-TFS: a very useful talk, the kind that makes you want to research more afterwards.

However, one of the first slides of the talk used a Google Trends comparison as justification for moving to Git: according to it, TFVC was going nowhere, whereas Git adoption was exploding. (Notice that the numbers are not absolute but normalized; for instance, if you compared just TFS and Mercurial, the number for TFS would be different.)

clip_image001

I had had previous encounters with skewed statistics in the past, so I waited until the Q&A at the end to ask a few questions about it.

I pointed out that since we were seeing data filtered through Google search, it did not account for the fact that most Microsoft tools users would first go to the MSDN site and search from there (or just press Help in Team Explorer), so some of the millions of people who have MSDN and use TFVC were not being represented.

Also, git has many client providers, from Xcode to TortoiseGit. Most of them are also OSS with “best-efforts” support, meaning “if you have an issue, search the Internet for a solution”. So in a way this trend curve also reflects where the information sources for git-related topics are: they are fragmented, so you need to go through a search engine to collect them all for research.

The most important data point I observed is that the TFVC curve (numbers aside) was pretty much stable. So, with TFVC interest flat, where was the git interest growth coming from?

I recalled that around 2001 I had seen something similar about Linux adoption on the desktop: some pointed out how it was growing so fast that it would soon overtake Windows. That growth curve looked similar: steady and slightly rising for Windows, steeper for Linux, with the forecast showing Linux overtaking Windows.

Time showed that the market shares were not changing that much, so where was the Linux growth coming from? Sun realized too late that it was their own Unix, Solaris (and, for that matter, all the Unix variants), that was being cannibalized as people moved to Linux. Sun later made Solaris open source, but it was already too late: most Unix users had converted to Linux.

If we could use “searches” as a proxy for “interest”, and interest as a proxy for market share, then with git growing and TFVC stable, what was the “Solaris” equivalent being replaced?

After I explained my point of view, the presenter went online and added Mercurial to the graph. There is surely a downward trend in Mercurial queries, but that alone did not explain how git searches were growing:

clip_image002

I mentioned Subversion to him but we didn’t have time to try it out so I continued from home.

I then added Subversion (listed under “Apache Subversion Revision Control System”) to the graph and, voilà, the puzzle was solved: the new trend graph confirmed that the open source community (and, for that matter, Microsoft TFS users as well) is readjusting its preferences. Git is now as popular as Subversion was in the 2007-2009 time frame. The growth of interest in git is pretty much explained by the winding down of searches on Subversion and Mercurial, plus probably some TFVC users who have migrated to TFS git:

clip_image003

I then continued my research by looking at the development of interest in other source control systems, starting with the all-time grandfather of many of them, CVS. Back-extrapolating its curve, you can see that it was very popular but was itself run over by Subversion around 2005:

clip_image004

I then tried others: Rational ClearCase, Rational Jazz Source Control, and Perforce. Of these, only Perforce showed a small but steady curve. ClearCase is now reduced to a trickle, and Jazz Source Control did not even show up. Finally, I had to see whether any record of Visual SourceSafe still existed, and as expected, interest dwindled after 2005:

clip_image005

As a final experiment I tried the “Forecast” feature, which seems to trace a simple extrapolation based on the data so far (forecast starts at ABC points below). The extrapolation confirmed the steadiness of the TFVC curve and the ongoing dwindling of Subversion as it gets overtaken by git, like all the other open source version control systems:

clip_image006

So in conclusion:

- Git will become to open source version control systems what Linux is to open source Unix-like systems;

- TFVC will remain stable for the foreseeable future, with users of TFS-git adding to the number of git adopters;

and

- You can use statistics to justify any point of view, so be on the lookout for any inadvertently skewed perspectives;

- Do not just accept the data, think about it too – logic will help you in finding the hidden aspects (the “Solaris”) of the question;

If you have read so far and have a different perspective, please let me know what your thoughts are on this.

Scaled Agile Framework: Using TFS to support epics, release trains, and multiple backlogs whitepaper

The SAFe whitepaper and download were launched today. See the announcement by Greg Boer on the MSDN ALM blog.

This was the result of lots of hours of internal contribution by the ALM Rangers.

The whitepaper provides both a high level view of how SAFe is realized using TFS, as well as detailed configuration/customization details.

In addition to the whitepaper, this release includes a download of the Visual Studio out-of-box process templates, with SAFe related customizations already made: Team Foundation Server 2013 Process Template Samples - Support for Scaled Agile Framework (SAFe).

This pretty much addresses Gartner’s concerns in their latest ADLM state-of-the-industry report, showing a quick turnaround by Microsoft in getting them addressed.

Issue: updating Field AllowedValues that differ only by casing

Issue

Say you created a field and by mistake, there was a typo in one of the allowed values:

<FieldDefinition name="FieldToTestLowerUpperCase" refname="Custom.FieldToTestLowerUpperCase" type="String">
   <ALLOWEDVALUES expanditems="true">
     <LISTITEM value="Out of scope" />
     <LISTITEM value="Value 1" />
     <LISTITEM value="Value 2" />
   </ALLOWEDVALUES>
</FieldDefinition>

What you really wanted was “Out of Scope”, not “Out of scope”:

<FieldDefinition name="FieldToTestLowerUpperCase" refname="Custom.FieldToTestLowerUpperCase" type="String">
   <ALLOWEDVALUES expanditems="true">
     <LISTITEM value="Out of Scope" />
     <LISTITEM value="Value 1" />
     <LISTITEM value="Value 2" />
   </ALLOWEDVALUES>
</FieldDefinition>

Using Process Editor, even if you modify the value and republish it, the casing does not change.

Troubleshooting

I have been able to replicate your scenario with the Process Editor in VS 2013 Update 3 and TFS 12.0.30723.0 (Tfs2013.Update3).

I deleted the field using Process Editor to take it out of a custom Task, and then deleted it in the command line with witadmin:

witadmin deletefield /collection:http://<tfsserver>:8080/tfs/<yourcollection> /n:Custom.FieldToTestLowerUpperCase

then re-added it using Process Editor with the right casing (“Out of Scope”) and rebuilt the cache (“witadmin rebuildcache”). It still did not work; the field kept the same value.

I then applied a simple change (adding an extra space between “of” and “Scope”) and saved it; the new value had the uppercase plus the extra space (“Out of  Scope”). Then I modified the field back to a single space and rebuilt the cache, but it returned to lowercase (“Out of scope”).

To see whether it was a bug in Process Editor, I repeated all the operations using just witadmin at a command prompt. It still did not work: even after an update, retrieving the work item definition showed the word “scope” in lowercase.

The value was cached somewhere, and not being able to update it is definitely a bug. By looking into the Fields table I confirmed that nothing is really deleted, only marked as deleted, and most likely the row is reused when the value is reinserted. In addition, when a field’s AllowedValues list changes, the import (whether via Process Editor, witadmin, or the API) does not consider casing when checking whether a value needs to be updated.
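The case-insensitive comparison is easy to reproduce outside TFS. A small sketch of the effect (using SQLite with COLLATE NOCASE as a stand-in for the actual Constants table in SQL Server, whose default collations are typically case-insensitive as well):

```python
# Show why a casing-only change looks like "no change" to a case-insensitive
# string comparison, so an importer that checks for an existing value before
# updating will skip the write entirely.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Constants (DisplayPart TEXT COLLATE NOCASE)")
conn.execute("INSERT INTO Constants VALUES ('Out of scope')")

# Case-insensitive check: the "new" casing compares equal to the stored one.
ci = conn.execute(
    "SELECT COUNT(*) FROM Constants WHERE DisplayPart = 'Out of Scope'"
).fetchone()[0]
print(ci)  # 1 -> the value "already exists", so nothing gets updated

# A case-sensitive (binary) comparison would have caught the difference.
cs = conn.execute(
    "SELECT COUNT(*) FROM Constants "
    "WHERE DisplayPart = 'Out of Scope' COLLATE BINARY"
).fetchone()[0]
print(cs)  # 0 -> the two casings really are different strings
```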

Workaround

I found the “Out of scope” value in the TFS Constants table (within the collection database):

SELECT PartitionId, ConstID, DomainPart, fInTrustedDomain, NamePart, DisplayPart,
       String, ChangerID, AddedDate, RemovedDate, SID, Cachestamp, ProjectID,
       TeamFoundationId, fReferenced
FROM   Constants
WHERE  DisplayPart = 'Out of scope'
ORDER BY DisplayPart

Next I manually updated it to “Out of Scope”, and refreshed. This fixed the issue.

ATTENTION: Do this at your own risk, as modifying TFS tables directly is neither recommended nor supported, and might put your database in an unsupported state. I tested this on a sample TFS installation, which is not in production.

I only provided this workaround as a last resort and because it was a simple enough update of a string value. A better, supported path would be to open a case with Microsoft support using your MSDN incidents, and have it escalated to the Product Team as a Bug (I might also open a bug with Connect later, and will post the link here).

A few pointers on how to use Delphi applications with Coded UI

To use Delphi-based UIs with Coded UI tests, you need to implement the MSAA interfaces for each component you want Coded UI to see. Example implementations:

- TEdit
- TreeView

The Coded UI extensibility framework works mostly with MSAA-compliant applications (http://msdn.microsoft.com/en-us/library/dd380742.aspx). However, if you can’t get to the Delphi source code to enable MSAA, you will have to make do with the plain Win32 support (http://msdn.microsoft.com/en-us/library/dd380742.aspx).

Is it possible to build a plug-in or add-on in .NET, using the Coded UI extensibility, to identify native properties of Delphi (VCL) UI controls (like id or control name)? As mentioned above, it is the UI control itself that has to expose MSAA-compliant properties to be visible; that is, the TEdit or TForm needs to implement them. However, the documentation on how to use Coded UI with Silverlight states the following:

“To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application to enable the information about a control to be available to the Silverlight plugin API that you use in your coded UI test or is used for an action recording.”

If I understand this correctly, it might be possible to do the same for Delphi .NET (CLR) applications at the assembly level, though I have not seen any reference implementation of this. For applications compiled to native code, you would have to go to the source, as explained above.

Issue: Failed to push new glyph for <file-excluded-by-gitignore> Return code from SccGlyphChanged was -2147024809

Issue

Visual Studio 2013 Ultimate Update 3 RTM with Microsoft git provider returns error "Failed to push new glyph for <file-excluded-by-gitignore> Return code from SccGlyphChanged was -2147024809." in Output window.

Details

The issue occurs every time a file is modified in an editor. Focus switches to the Output window labeled "Source Control - Git", and an error message as above is returned for each file excluded by patterns in .gitignore.

Analysis

This is an issue with the way Solution Explorer interacts with notifications from the git source control provider.

A glyph is a source control UI element (all those little symbols to the left of each file in Solution Explorer), as explained in the documentation on implementing a source control provider. That post talks at length about what is going on in general, including what the error number means.
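As an aside, the decimal return code in the message is just a COM HRESULT printed as a signed integer. A quick sketch decoding it (plain Python, using the standard HRESULT bit layout):

```python
# Reinterpret the signed decimal return code from the error message as an
# unsigned 32-bit HRESULT and split out its facility and error code fields.
rc = -2147024809
hresult = rc & 0xFFFFFFFF
print(hex(hresult))                  # 0x80070057

facility = (hresult >> 16) & 0x1FFF  # HRESULT facility field
code = hresult & 0xFFFF              # HRESULT code field
print(facility, code)                # 7 87

# Facility 7 is FACILITY_WIN32 and Win32 error 87 is ERROR_INVALID_PARAMETER,
# i.e. the HRESULT is E_INVALIDARG: SccGlyphChanged was handed an argument it
# considered invalid (here, a file excluded by .gitignore).
```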

What is missing is the connection with .gitignore patterns. This might point to some logic error in the code that handles the background processing that updates the Solution Explorer UI every time a file source control status is modified by editing it in place.

The user was using Update 3 RC prior to installing Update 3 RTM, so this might be a leftover from the RC.

I looked into another related issue with .gitignore: when you have a specific exclusion rule (say "/TFS"), a file whose name starts with "TFS" is not processed, and its Solution Explorer glyph changes to that of a committed file. The workaround for this was to take out the .gitignore rules. Another would be to suppress the Output window activation and filter out messages related to this error using its automation interface.

Everything points to an issue with the way the git provider interacts with the Visual Studio UI.

Resolution

I have confirmed with Microsoft that this is a bug and that it has been fixed in an upcoming version.

Dynamically creating or modifying controls in a TFS form

TFS forms can only be modified at design time.

The TFS forms engine is capable of specifying different layouts for different targets, which already tells us that it is a subset of the rendering engines for those targets and, as far as I know, has fewer capabilities. One of those restrictions is exactly that forms can only be modified at design time.

As a workaround, at the bottom of the page on how to specify work item form controls, there is a link to an article on how to implement a custom control using Winforms. It might be possible to create a custom control with multiple drop down boxes which are dynamically displayed; I have not yet tried that.

You can get better samples on how to implement a custom control in the Custom Controls for TFS Work Item Tracking project at CodePlex. The challenge though will be to also generate a web implementation for the same control, otherwise your work items will be visible only from Team Explorer.

Issue Workaround: When launching a test from MTM, Test Runner does not launch

Issue

When launching a test from MTM, whether manual or automated, MTM would not launch Test Runner and would fail to do anything. The following error message was added to the Event Log after the failure:

    <Provider Name="VSTTExecution" />

    <Data>(mtm.exe, PID 16168, Thread 1) Exception: System.IO.FileNotFoundException

                Message: Could not load file or assembly 'Microsoft.VisualStudio.TestTools.UITest.WindowsStoreUtility, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.

Troubleshooting

We tried running it from another computer to isolate Visual Studio as the variable. It worked on another computer with Visual Studio Ultimate 2013 Update 3; the failing computer had Visual Studio Ultimate Update 2.

Resolution/Workaround

Root cause was not determined. Installing Update 3 fixed it.

SAFe: one option to scale Agile

The last year has been marked by the steady adoption of SAFe (Scaled Agile Framework) in companies that, although already development powerhouses, had still been struggling to transform themselves into Agile development shops. SAFe has thus played an increasingly important role for companies transitioning from more traditional SDLCs to Agile-based ones.

So what is SAFe? It is the evolution of Dean Leffingwell's long-time work on methodologies, starting with RUP, tempered by his practical experience bringing Agile and Lean to many companies over the last ten years. According to its website, "The Scaled Agile Framework is an interactive knowledge base for implementing Agile practices at enterprise scale." As such, it has a nicely crafted big picture with hundreds of icons you can drill into to learn more about a specific topic, sometimes going even deeper. It reminded me of the richness of the RUP documentation, albeit all of it related to current and relevant Agile topics.

image

The navigation is a bit bumpy: to protect their IP, the authors made it difficult to see more than one topic at a time (no “open link in new tab” option), and you can’t select text either, so if you are quoting them you will need to type it all. Other than that, it can be seen as a rich “map” of all things Agile in an enterprise. The main references of SAFe are Leffingwell’s books, both available on Safari.

How is SAFe different from Scrum? The short answer is that it encompasses Scrum. You could say that Scrum targets the team, and that SAFe targets the enterprise. Though based on Lean and Agile principles at the micro, team level, it also addresses architecture, integration, funding, governance and roles at the macro or enterprise level.

And here is what is relevant to a Business Analyst: SAFe clearly provides a way to structure and manage requirements at the enterprise level. It starts by reasserting what has become pretty much the standard requirements management hierarchy: Portfolio, Program, and Team:

· Portfolio: Business and Architecture epics; epics span releases

· Program: Features fit in releases

· Team: Stories fit in iterations

The novelty of SAFe stems from providing practical solutions to some of the “elephant-in-the-room” problems that have prevented scaling up requirements management in many companies. Among these, three caught my attention:

· Architecture runways: this is something obvious to the technical team, but SAFe makes it explicit by adding Architecture epics as items in the Portfolio backlog. An Architectural Runway “is the extant technical infrastructure (instantiated in code) necessary to support the implementation of upcoming features without excessive, delay-inducing, redesign.” By making Architecture topics explicit, businesses become aware of the hidden part of the iceberg needed to implement business features: a plane can’t safely fly without a runway.

· Agile Release Train: it is “a long-lived team of agile teams, typically consisting of 50-125 individuals, that serves as the program-level value delivery mechanism in SAFe.” This is the equivalent of a Scrum team, but at the program or release level. With this concept clearly understood, the business becomes aware of the need to support a common team sprint cadence, and “teams are aligned to a common mission via a single Program Backlog”. You could say this idea is similar to the Scrum-of-Scrums concept, but it goes beyond it by unifying the teams around releasing Potentially Shippable Increments (PSIs) as a unit, rather than just synchronizing separate Scrum teams at the backlog level while still not coordinating the delivery of working software, which is, after all, the measure of progress for Agile teams.

· Investment Themes: by adding these to the portfolio backlog and mapping them directly downstream to their ramifications at the program and team levels, SAFe ensures that what is being worked on has been budgeted and that execution is tied to strategy. The business becomes aware of the need to prioritize, and of the downstream repercussions of short-changing initiatives that might affect its financial future.

Reception

SAFe is not without its critics. It has been bashed by Lean and Agile/Scrum champions such as David Anderson, Ken Schwaber, and Mike Cohn. However, those initial reactions seem to "throw the baby out with the bathwater", to use an old expression. There is definitely value in the framework, but only if considered with an understanding of how ideas are adopted and used within the enterprise.

For the Scrum practitioner, a minor annoyance is its confusing definition of Scrum: Leffingwell invented “SAFe ScrumXP” as a combination of Scrum project management techniques with XP engineering practices. This separation stems from a backwards pseudo-definition of Scrum from 15 years ago, when Scrum was perceived as being solely a project management framework. At the time, Mike Beedle coined the name “Xbreed” (later “Enterprise Agile Process”) to mean exactly Scrum + XP.

“Xbreed” didn’t catch on because eventually everyone started to use the word “Scrum” to mean the engineering practices as well (notice how Beedle already says as much here, even for XBreed), and later Scrum.org and the Scrum Alliance made it official by adding the “Professional Scrum Developer” and “Certified Scrum Developer” courses, focused exactly on teaching engineering practices alongside the Scrum project management framework.

Finally, among the strongest criticisms is that SAFe does not conform to a well-known process adoption best practice popularized by Alistair Cockburn: "stretch to fit". Like RUP, SAFe needs to be tailored down in size, and many might be tempted to adopt everything when in doubt about what to do. SAFe can be used at the Shu level, and maybe a bit at the Ha level (as Cockburn refers to them), but to experienced Agile practitioners, on both the technical and product management sides of the business, SAFe will appear overwhelming and verbose, and might get in the way by feeling overly prescriptive. However, for companies coming out of the traditional-SDLC middle ages, SAFe can feel like the map to a gold mine. It gives the uninitiated in Agile wisdom a sense of direction and the security to finally start experimenting with how to get out of the Waterfall corner many businesses have painted themselves into.

Some references pro and against:

Method Wars: Scrum vs SAFe, by Ian Mitchell

“SAFe has gained traction [with big companies] not in spite of poor agile credentials, but rather because of them.”

Has SAFe Cracked the Large Agile Adoption Nut, by InfoQ

“However, not all in the community think SAFe is a good idea. In fact, many have a strong negative reaction”

Controversy around SAFe, DAD and Enterprise Scrum, by Elizabeth Woodward

“SAFe has been empirically derived from addressing problems as teams scale--lessons learned over time--and it continues to evolve.”

unSAFe at any speed, by Ken Schwaber

“The boys from RUP (Rational Unified Process) are back.”

Kanban – the anti-SAFe for almost a decade, by David Anderson

“SAFe appears to collect together a number of techniques from software development processes from the 1990s and 2000s”
