Does poor IT performance really impact a student’s final grade? It’s an interesting question; of course there are many variables, but the bottom line is that, in one way or another, it can.
Given how reliant higher education is on IT these days to underpin learning and research, there are many ways that unreliable or poorly performing IT can influence the end result for students, either directly or by impeding university staff and, by extension, the students they support.
For example, the lost productivity caused by poor IT performance, such as an inability to access a key system, can prevent teaching staff from delivering their lessons. Even recurring ‘technical troubles’ can cause significant interruptions and delays for students and staff alike.
We’ll explore this in more depth below, based on problems we’ve seen ourselves and helped universities overcome, as well as feedback from industry leaders and first-hand research from higher education IT users - the students.
Quickly, let’s look at what’s covered on this page:
Common IT performance challenges that impact students’ learning
What can be done to tackle these challenges
Common university IT challenges that impact a student’s ability to learn
Student productivity syndrome
Students are at university to get their degrees, build social relationships, grow and ultimately find out who they want to be. It’s a very formative time for them, and doing well in their studies is a priority (at least for most of them). However, even the most enthusiastic students face many distractions. When they put time aside to study, write coursework or do research, they tend to be in the right headspace to be productive.
But what if this ‘focus time’ is interrupted, and on a semi-regular basis?
Typical interruptions could include: intermittent Wi-Fi access, learning portals running slowly, student websites and portals going down, ‘Blackboard’ latency...you get the picture.
Sure, these issues are bound to arise periodically and the onus is on the student to ensure they put aside significant time to learn their subjects and submit all their work to the best of their ability.
However, if these interruptions are frequent, they will, without a doubt, impact how productive that student can be. Is the onus not on the university to ensure that students have the tools and information they need to succeed?
IT teams that are ‘putting out fires’ can’t execute strategic plans
Support tickets, patching, root cause analysis, service requests - the list goes on. These tasks fill the day of an IT professional working in higher education, and the amount of time they take depends on a number of factors:
- The complexity of the environment they’re working in
- The process by which these issues are triaged and then dealt with
- The tools they have at their disposal
- The accuracy of their visibility into what’s actually happening across the IT estate
However, when these fires are being fought, who’s driving strategy?
Yes, the vision of an all-singing, all-dancing network designed with optimum user experience in mind (one that opens doors for you as you approach and reminds students to leave for their class right now if they also want to grab their regular coffee on the way) is a little far in the distance for most universities, but…
The prerequisites to this level of capability help build the foundations for solid and secure delivery of educational IT as a service. They take a step towards removing the need for heavy manual intervention, or for a fine-toothed comb and a hazarded guess at the root cause of problems. Investing in the right IT initiatives can relieve IT staff of firefighting and enable them to execute strategy.
Complaining without explaining
As you can probably relate, there's no shortage of incoming dialogue from users announcing the problems they’re experiencing. “It’s running slow”, “It’s not working”, “Oh… I’m not sure."
As much as user feedback is crucial to improving a service, when it’s vague and nonspecific, it’s not actually very helpful. Understanding the problem is difficult enough before trying to diagnose the scale or cause (e.g. is it just this user?). What is the actual impact of the problem?
The problem with this limited information is that it’s difficult to know where to begin troubleshooting: is it the device? The network? The application? The Wi-Fi? Without visibility into these various areas and their metrics, it’s a nearly impossible task to pair the vaguely described issue with what’s happening in real time and begin to identify the root cause.
If you don’t have a specialist, you’re probably not getting the most out of it
We’ve been pulled into countless situations where we see a common theme:
- Organisation buys new tech/solution
- Vendor sets it up and implements it
- The challenge they were trying to overcome is either still present or not solved to the degree they anticipated
The problem here typically relates to one of two things:
- Not understanding the original problem
- The technology purchased has the capability to achieve the desired results, but not in its default configuration
It’s not uncommon for vendors to pull the solution out of the box, install it and call the job done, but deriving the true capability of a solution often requires a specialist who understands all the bells and whistles. You wouldn’t hand an all-singing, all-dancing firewall to a support engineer and be confident your enterprise is protected in every possible way.
This begs the question: if you’re not getting the full capability out of a solution, what’s the impact? It’s not only a really poor ROI; it also takes a lot of ongoing effort to manage and consumes ever more hours trying to extract what you need.
At this point, it’s worth comparing the value of services versus point solutions. Unless you have the in-house skills to extract the most from a solution, or even understand the full art of the possible, then the benefits of outsourcing to a team of specialists using consistently up to date technology could ultimately be a more attractive option.
When you make a change, does it have a positive impact, and how do you know?
It worked, because everything is amazing! It didn’t work because it broke X, Y or Z!
Apart from the obvious doom or glory answer to this question, how do you monitor the impact of change?
Student incident reports can lack clarity and often aren’t particularly timely; and if performance is generally poor anyway, students may well have just accepted that “Uni IT is so slow”.
Monitoring only certain aspects of a change can result in misguided decision-making. Getting real insight into the impact of changes across the estate, including vital metrics such as end-user experience, is key to understanding whether a change has succeeded. This data can then be compared to the pre-change state of things, thereby defining any actions that need to be taken.
Without clear visibility across systems, apps and users, it’s impossible to understand the actual impact on staff and students.
How can these challenges be tackled?
We’ve just pointed out some specific challenges that IT teams in higher education face, but you probably already know this. So, to be a bit more helpful than that, here are some suggestions to alleviate these challenges and put the power back in the hands of IT teams.
Digital experience monitoring (DEM)
End user experience monitoring, or EUX, goes by many names but essentially it’s the monitoring of how IT is delivered and consumed across your user base, be that staff or students, and all the systems in between.
What is it?
Monitoring of key metrics throughout your network from data centre to users, to the Cloud.
Why is it required?
Firstly, it gives visibility into connections and transaction times across your estate, giving insight into their performance and highlighting any issues and processes that are causing problems. It also gives this visibility in areas outside of your control, e.g. Wi-Fi issues in students’ homes. This speeds up root-cause analysis by giving a clear, objective direction for investigation and improving understanding of the full impact of the problem.
Secondly, it provides a baseline of where your IT ‘normally’ performs. This is useful for continuous proactive improvement, allowing IT teams to tackle their problem areas and consistently improve service levels for staff and students, as well as highlighting any low-hanging fruit for quick gains.
Baselining also provides a great before and after to monitor the impact of change, giving IT teams and leadership the data to see the impact and demonstrate ROI.
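As an illustrative sketch of the baselining idea (not any specific DEM product; the function name, sample values and 10% threshold are all hypothetical), comparing response-time samples collected before and after a change might look like this:

```python
import statistics

def compare_to_baseline(baseline_ms, post_change_ms, tolerance=0.10):
    """Compare post-change response times against a pre-change baseline.

    Returns a summary of the shift and whether it exceeds the tolerance
    (here, more than 10% slower than baseline counts as a regression).
    """
    base = statistics.mean(baseline_ms)
    post = statistics.mean(post_change_ms)
    change = (post - base) / base
    return {
        "baseline_mean_ms": round(base, 1),
        "post_change_mean_ms": round(post, 1),
        "change_pct": round(change * 100, 1),
        "regression": change > tolerance,
    }

# Hypothetical portal page-load samples (ms), before and after a change
before = [420, 450, 410, 480, 430, 440]
after = [460, 510, 495, 520, 505, 490]
print(compare_to_baseline(before, after))
```

A real DEM platform captures far richer data (per-user, per-application, per-location), but the principle is the same: a baseline turns “it feels slower” into a measurable, evidenced statement.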
Application performance monitoring (APM)
This is the practice of tracking key software application performance metrics using monitoring software and telemetry data.
What is it?
APM tools/services can provide an overview of server interactions and can drill deep into the application specifics (web page elements, database calls, Java method performance etc.) to help pinpoint what is impacting application performance.
Why is it required?
Whilst DEM can point you in the right direction when it comes to troubleshooting, it doesn’t give you the granular details that APM provides. When you’re having difficulty understanding the performance of a specific application then you need to get deep inside the tin in order to get the level of insight needed to achieve actual root-cause analysis.
APM provides the data to make decisions based on knowledge, not ‘best guess’. The amount of budget that is spent based on incomplete information is quite frightening. These decisions are best guesses and are often made to treat symptoms, not solve root-causes, leaving them ineffective and, frankly, a drain on funds.
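As a simplified sketch of the kind of telemetry an APM agent collects (real agents do far more; the `traced` decorator and `fetch_grades` function here are hypothetical stand-ins), timing instrumentation can be added around individual operations like so:

```python
import time
from collections import defaultdict
from functools import wraps

# In-memory store of timings per operation; a real APM agent would
# ship these to a collector rather than keep them in a local dict.
timings = defaultdict(list)

def traced(operation):
    """Record the wall-clock duration of each call under `operation`."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                timings[operation].append(elapsed_ms)
        return wrapper
    return decorator

@traced("db.fetch_grades")
def fetch_grades(student_id):
    time.sleep(0.01)  # stand-in for a real database call
    return {"student": student_id, "grades": []}

fetch_grades("s123")
slowest = max(timings, key=lambda op: max(timings[op]))
print(f"slowest operation: {slowest}")
```

Per-operation timings like these are what let an APM tool say “the grades page is slow because this database call takes 900ms”, rather than leaving the team to guess.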
Review how the network is managed
It’s likely that through organic growth, your methods of network management are being stretched - software-defined solutions could hold the answer.
What are software-defined solutions?
Software-Defined Networking (SDN) is an approach to networking that uses software-based controllers or application programming interfaces (APIs) to communicate with underlying hardware infrastructure and direct traffic on a network.
Why are they required?
Traditional networks weren’t built for the modern, cloud-first, SaaS-driven, rapidly changing, distributed era. For a long time, traditional WAN architecture has been complex and static, relying on manual intervention for configuration and not designed for Cloud operation.
A rapid increase in remote working and mobile application usage compounds the pressure, not only growing the size of the problem but also the number of connections to secure.
Complexity = congestion = latency, with a side helping of the complete unknown.
Software-defined network management solutions are a great option for higher education networks that need to manage not only connections from campus to campus but also a lot of remote connectivity.
Some benefits of SDN solutions include:
- Built-in optimisation
- Resilience with different carrier options
- A lot more insight into traffic
The two main solutions in the SDN space are SD-WAN and Secure Access Service Edge (SASE). Depending on the requirements of your specific university, one may be more suitable than the other.
SD-WAN combines software-defined networking concepts with traditional WAN technology to improve traffic routing and network operations.
SD-WAN acts as an overlay network: a network built on top of another network, supported by that network’s infrastructure, which separates services from the underlying infrastructure.
SD-WAN’s overlay is built on top of an organisation’s existing WAN connections to improve how data travels across the network.
Secure Access Service Edge is an emerging architecture that provides an organisation’s traditional network and security functionalities, but through a Cloud service that connects endpoints from anywhere. Rather than creating a centralised mass-VPN connection point at your datacentre before relaying traffic back out, SASE connects an endpoint straight to your dedicated “cloud domain” where traffic can be firewalled and inspected, then sent directly to SaaS or on-prem apps much more efficiently.
By uniting an organisation's necessary network and security services (firewall as a service, secure web gateways, etc.) into one platform, SASE aims to simplify network and security management.
To conclude, let’s come full circle: when it comes to final grades, where does the onus lie?
Of course, a student cannot directly blame university IT for a poor final grade. If there is an issue, it can be reported, other methods of carrying out their study can be found, and more time can be allocated where needed to ensure they complete their work to the best of their ability.
But isn’t that exactly where the question comes in? ‘To the best of their ability.’
If their ability to learn is reliant on a high-performing, reliable IT network, but it is impeded by the points stated above, where does the blame lie?
If the onus is on the student to achieve the best grades possible, with the resources available to them, then surely the onus is on university IT teams to ensure that they have those resources, tools, and teaching ready and available when it is required.
Dr Glenn Morgan
Director of Professional Services
Here to help
We've got an hour for you
Ensuring the performance of systems and applications for such a demanding and dispersed userbase is a huge challenge.
We have helped universities tackle these challenges for over 15 years and we'd be happy to provide helpful advice.
Take advantage of an hour of free consultancy to get help with your cyber obstacles.
Fill out the form below and we'll be in touch shortly.
Doing right with insight
Instead of a free coffee, answer 5 questions about your role in Higher Education IT and we'll donate £5 to Latch Children's Charity.