Terms of Service for MSP Partners

Protect your customers from fake and fraudulent sites targeting your brand. PhishFort's managed service finds and removes phishing websites, fraudulent social media content, and fake or malicious apps.

Effective date: 26th June 2025
Last updated: 26th June 2025

Service Levels

  • The Supplier shall at all times during the term of the Agreement provide the Managed Services in accordance with standard industry practice, meeting or exceeding the Service Level Performance Measure for each service as defined below.
  • The Supplier makes no guarantees about either the effectiveness of the takedown procedure or the time that this will take.

Performance Monitoring

  • The Supplier shall implement all measurement and monitoring tools and procedures necessary to measure, monitor, and report on the Supplier's performance in providing the Managed Services against the applicable Service Levels, in sufficient detail to verify compliance with the Service Levels.
  • The Supplier shall notify the Customer via [PARTNER] within a reasonable time if its performance, or any element of its provision of the Managed Services during the term of the Sales Agreement, is likely to fail or has failed to meet any Service Level Performance Measure.

Service Levels and Availability

Service: Website blocklisting
Service Description: Blocklisting** of identified malicious and/or phishing websites into Google Safe Browsing.
Key Performance Indicator: Once the takedown is initiated, the infringing site is reported to all global blocklist partners.
Service Level Performance Measure: Monthly median insertion time of 24 hours or less.
Unmet Service Level Indicator: Median insertion time over 24 hours (applies only where a minimum of ten websites were blocklisted in the month).

Service: Takedown* of domains, apps, social media accounts, and copyright and/or trademark infringements
Service Description: Malicious and/or phishing websites and/or apps are taken down.
Key Performance Indicator: 0–15 working days spent on takedown.
Service Level Performance Measure: Takedown process commenced within 12 hours of identification/reporting.
Unmet Service Level Indicator: More than 15 working days spent on takedown.

Service: Dashboard availability and maintenance
Service Description: Dashboard for reporting and tracking takedowns.
Key Performance Indicator: 24/7, 365 days per annum.
Service Level Performance Measure: At least 99.9% availability in a month.
Unmet Service Level Indicator: Less than 99.9% availability in a month.

Service: Takedown communication
Service Description: For a submitted takedown incident, correspondence with the reporter of the incident via in-portal communication or, if selected by the reporter, via email regarding the status of the incident.
Key Performance Indicator: The reporter is updated on every action taken as new developments are received regarding the incident; updates are posted on the platform.
Service Level Performance Measure: Communication threshold met at least 99% of the time.
Unmet Service Level Indicator: Communication is late or missing on more than 1% of required intervals.

*Note: The Supplier undertakes to action a takedown as soon as possible, and in any event within 12 hours of identification and/or reporting. As of our last audit, the Supplier's median takedown time for phishing cases with sufficient evidence is 6 hours. Given the number of third parties involved in the takedown process that are outside the Supplier's control, the Supplier cannot make guarantees about either the effectiveness of the takedown process or the period of time that it will take.

**Google Safe Browsing powers most of the major browsers in the world, including Safari, Chrome, Edge, and Firefox. PhishFort is a trusted vendor of Safe Browsing, which means we have direct access to the service. Getting a website blocklisted in Safe Browsing protects the overwhelming majority of internet users within minutes once accepted by the Google team. The takedown process happens in parallel, ensuring that the site goes down for good. Given the jurisdictions and third parties involved in this process, we cannot make guarantees around the timeline of the takedown, but we endeavor to make a site unreachable by most of the internet within 4 hours.
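For context, here is a minimal arithmetic sketch of the downtime budget implied by the 99.9% dashboard-availability target in the table above. It assumes a 30-day month; the Agreement's actual availability measurement method controls.

```python
# Downtime budget implied by "at least 99.9% availability in a month".
# Illustrative arithmetic only, assuming a 30-day month.
minutes_in_month = 30 * 24 * 60                           # 43,200 minutes
availability_target = 0.999
allowed_downtime = minutes_in_month * (1 - availability_target)
print(f"{allowed_downtime:.1f} minutes of downtime permitted per month")  # ~43.2
```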

Service Credits:

In the event a Service Level is missed in 2 months within a rolling 12-month period, the following penalties will be applied:

Dashboard Availability – a percentage penalty is charged on an escalating scale as follows: if availability is less than 99%, a penalty of 1% of that month's takedown fees applies; if less than 95%, a penalty of 5% of that month's fees applies; and so on.

Takedown communication – a percentage penalty is charged on an escalating scale as follows: if the communication threshold is met less than 99% of the time, a penalty of 1% of that month's takedown fees applies; if less than 95%, a penalty of 5% of that month's fees applies; and so on.
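As an illustration of how the service-credit terms above could be evaluated, here is a minimal Python sketch. Only the two tiers spelled out above are implemented; the escalating scale continues beyond them per the Agreement, and all function and variable names are illustrative, not contractual.

```python
def penalty_rate(measured_pct: float) -> float:
    """Penalty on that month's takedown fees for a missed Service Level.

    Only the two tiers stated in the Agreement are implemented here;
    the escalating scale continues beyond them ("and so on").
    """
    if measured_pct < 95.0:
        return 0.05  # less than 95% -> 5% of that month's fees
    if measured_pct < 99.0:
        return 0.01  # less than 99% -> 1% of that month's takedown fees
    return 0.0       # Service Level met; no penalty

def credits_triggered(missed_by_month: list) -> bool:
    """True once a Service Level is missed in 2 months of any rolling 12-month period."""
    return any(sum(missed_by_month[i:i + 12]) >= 2
               for i in range(len(missed_by_month)))

# Example: 94.0% dashboard availability in a month with $1,000 of takedown fees.
print(1000 * penalty_rate(94.0))  # -> 50.0
```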

Details of Managed Services

  1. Website and Domain Services:
    1. Website and Domain Name Monitoring:
      1. The Supplier will conduct algorithmic scanning across internet resources and assess whether each Incident scanned poses a threat to the Customer's brand. This assessment is conducted using a combination of machine-learning-based algorithms and manual intervention by a PhishFort analyst. This process occurs entirely on Computer Systems operated by the Supplier. The data sources scanned include, but are not limited to, the following (a simplified illustration of this screening is sketched after this list):
        1. Newly registered domains from popular Top Level Domains;
        2. Newly issued SSL certificates;
        3. Search engine data; and
        4. Threat intelligence feeds.
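As a simplified illustration of how a newly observed domain might be flagged for analyst review, here is a hedged sketch using fuzzy string matching. The Supplier's actual detection systems are proprietary; the brand name, threshold, and function names below are assumptions for illustration only.

```python
from difflib import SequenceMatcher

BRAND = "phishfort"  # hypothetical protected brand

def lookalike_score(domain: str, brand: str = BRAND) -> float:
    """Similarity between a domain's first label and the protected brand."""
    label = domain.split(".")[0].replace("-", "")
    return SequenceMatcher(None, label, brand).ratio()

# Newly observed domains (e.g., from new registrations or SSL certificate
# data) scoring above an assumed threshold would be queued as Incidents
# for review by software and/or an analyst.
for candidate in ["phishf0rt.com", "example.org", "phish-fort-login.net"]:
    if lookalike_score(candidate) > 0.75:
        print("queue for review:", candidate)
# -> queue for review: phishf0rt.com
# -> queue for review: phish-fort-login.net
```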
      2. Every Incident collected by the scanning is reviewed by the Supplier's proprietary software, an analyst, or both. Each Incident is classified into one of three categories (a simplified sketch of this triage follows this subsection):
        1. Monitor: Incidents in the Monitor state are deemed to have the potential to become malicious at some point in the future, but do not appear to be malicious currently. Incidents classified into the Monitor state are regularly reassessed by automated systems, and when a significant change is deemed to have occurred in the domain or website, the Incident is reassessed by an analyst and/or an automated algorithm.
        2. Safe: An Incident may be deemed to pose little or no harm to the Customer, in which case the Incident is marked as "Safe" and no further action is conducted on it by the Supplier.
        3. Malicious: An Incident is considered malicious when it poses a credible threat to the Customer's brand, business, or customers. An Incident marked as malicious is generally blocklisted, and a takedown procedure may be initiated according to an internal set of guidelines which take into account a number of factors, including the nature of the Incident, the jurisdiction of the registrar of the domain, and the hosting provider of the website.
      3. Incidents that are moved into a Monitor or Malicious state are presented through the Supplier's dashboard, available at the Supplier's website. The Customer can log into the dashboard and view the data that the Supplier has collected pertaining to the Customer.
      4. Given the volume of data processed by the Supplier and the open nature of the internet, it is not possible to identify every Incident against the Customer. The Supplier endeavors to make the detection process as accurate and exhaustive as possible, but it is not possible to discover and identify every malicious Incident targeting the Customer.
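For illustration only, a minimal sketch of the three-state triage described above. The scoring thresholds and names are assumptions, not part of the Supplier's proprietary systems.

```python
from enum import Enum
from typing import Optional

class IncidentState(Enum):
    MONITOR = "Monitor"      # could become malicious later; reassessed on change
    SAFE = "Safe"            # little or no harm; no further action taken
    MALICIOUS = "Malicious"  # credible threat; blocklisting/takedown may follow

def triage(model_score: float, analyst_verdict: Optional[bool] = None) -> IncidentState:
    """Classify an Incident from an ML score plus an optional analyst review.

    The 0.2/0.9 thresholds are illustrative assumptions only.
    """
    if analyst_verdict is True or model_score >= 0.9:
        return IncidentState.MALICIOUS
    if analyst_verdict is False or model_score <= 0.2:
        return IncidentState.SAFE
    return IncidentState.MONITOR  # ambiguous cases are kept under watch

print(triage(0.5))  # -> IncidentState.MONITOR
```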
    2. Website and Domain Blocklisting and Takedown
      1. Where the Supplier confirms an Incident is malicious and identifies phishing activities pursuant to the Services described under section 4 above, and has identified the Phisher to an extent that makes the following possible, the Supplier shall perform the following steps to blocklist the Confirmed Incident within 48 hours of detection by the Supplier:
        1. Submit the Incident into blocklists owned by the Supplier, which are made publicly available on the Supplier's source code repository;
        2. Report the Incident to third-party blocklists, including those maintained by Google, Microsoft, and Symantec; and
        3. Report the Incident to the provider hosting the Incident, such as the domain registrar and hosting provider for websites, the relevant app store for apps, and the social media platform for social media Incidents. The responsiveness of the provider will differ on a case-by-case basis. Given the number of third parties involved in the process that are outside the Supplier's control, the Supplier cannot make guarantees about either the period of time that it will take or the likelihood of success.
    3. In cases in which the Supplier cannot, for whatever reason, blocklist a website, the Supplier may proceed to initiate a website takedown. This involves contacting the hosting provider of the website, notifying them that they are hosting a phishing website, and requesting that they take it down. The hosting provider is legally obliged to remove content involved in illegal activity, but the responsiveness of hosting providers differs on a case-by-case basis. Given the number of third parties involved in this process that are outside the Supplier's control, the Supplier cannot make guarantees about either the effectiveness or the period of time that this will take.
    4. The Supplier will endeavor to blocklist identified malicious and/or phishing websites into Google Safe Browsing within the shortest time frame possible. The Supplier commits to ensuring that, over the course of a month, the median time to insert a website into the Google Safe Browsing service will be 24 hours or less. For this condition to be triggered, a minimum of ten websites must have been blocklisted by the Supplier over the period of a month. In the event that at least ten websites were blocklisted over the period of a month and the median time to blocklist them exceeded 24 hours, the Supplier's fee for the month will be halved. For example, if the fees owed to the Supplier for the service were $10 per month, at least 10 websites were blocklisted over the course of the month, and the Supplier failed to meet the 24-hour median blocklist time, the fees owed to the Supplier for the month would be $5 (a worked sketch of this calculation follows below).
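As a worked illustration of the clause above, here is a minimal Python sketch, assuming insertion times are measured in hours. All names are illustrative; the Agreement's text controls.

```python
from statistics import median

def monthly_fee_due(insertion_hours, monthly_fee):
    """Apply the fee-halving condition described in the clause above.

    If at least ten websites were blocklisted in the month and the median
    Google Safe Browsing insertion time exceeds 24 hours, the month's fee
    is halved. Function and variable names are illustrative only.
    """
    if len(insertion_hours) >= 10 and median(insertion_hours) > 24:
        return monthly_fee / 2
    return monthly_fee

# Worked example from the clause: $10 fee, ten sites, median of 30 hours -> $5.
print(monthly_fee_due([30] * 10, 10.0))  # -> 5.0
```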
  2. App Services:
    1. App Store Monitoring:
      1. The Supplier will periodically search the Google Play Store and the Apple App Store in order to identify apps that may exist on the store and which impersonate the Customer's brand with the intent of harvesting customers' private keys or credentials; and
      2. Unofficial third-party Android or iOS app store websites not maintained by Google or Apple are within the scope of the App Store Screening process.
    2. App Store Takedown:
      1. Mobile applications discovered on a store which impersonate the Customer and which demonstrably intend to harvest user credentials or private keys, or which sufficiently imitate the brand of the Customer, will be reported to the respective store.
      2. The Google Play Store, the Apple App Store, and third-party app stores are not managed or controlled by the Supplier. Given the number of third parties involved in this process that are outside the Supplier's control, the Supplier cannot make guarantees about either the effectiveness of the takedown procedure or the period of time that it will take.
  3. Social Media Services:
    1. Social Media Monitoring:
      1. The Supplier will conduct algorithmic and manual scanning across several social media platforms in order to identify cybersecurity threats to the Customer's brand. This assessment is conducted using a combination of machine-learning-based algorithms and manual intervention by a PhishFort analyst. This process occurs entirely on Computer Systems operated by the Supplier. The data sources scanned include, but are not limited to:
        1. Facebook;
        2. Twitter;
        3. LinkedIn; and
        4. YouTube.
      2. The Supplier seeks to identify social media accounts that aim to cause damage to the Customer by seeking to commit fraud against the Customer's user base. In this case, fraud means:
        1. Phishing; and
        2. Taking actions to steal money, usernames and passwords, personal information, and/or information which should otherwise be kept private and confidential by the user.
      3. Given the independent nature of the platforms listed above, the Supplier cannot make any guarantees around the reliability of fraud detection, takedown efficacy, and/or the continued monitoring of these platforms. For example, changes to a platform that could impact the Supplier's ability to provide the service include, but are not limited to, changes to the platform's terms and conditions, changes to programmatic access to the platform through Application Programming Interfaces, and changes to the platform's search algorithm. These and other such changes may impact the Supplier's ability to monitor these platforms.
      4. Account impersonation involves a third party creating an account on a social media platform that impersonates the Customer or a Customer's representative. When the Supplier discovers a case of account impersonation, the Supplier will attempt to initiate a takedown according to the procedure below.
    2. Social Takedown:
      1. Social media takedown involves reporting a malicious account or user-generated content to the platform on which it was posted. This process involves following the abuse report procedure outlined independently by each respective platform. In general, this involves providing evidence of the malicious activity conducted by the account or proof that the account has impersonated the identity of the Customer.
      2. Each platform has internal processes for handling abuse reports on its platform and acts in accordance with these. The Supplier will report malicious Incidents discovered to the respective platform in line with its abuse reporting process. The amount of time taken for the respective platform to review and action these Incidents varies and, because this is beyond the control of the Supplier, the Supplier cannot make any guarantees around the timing of this process.
      3. In some cases, the social media platform may choose to reject the abuse report and decide not to take action on limiting the infringing account or content. In this case, the Supplier will contest this decision if possible. If the Supplier is unable to contest the social media platform's decision, the Supplier may resubmit the abuse report.
      4. If the social media platform decides not to take action against the infringing account or content in question, and the Supplier has attempted to contest or resubmit the abuse report, the infringing account or content may be deemed a Failed Takedown. In this case, the Supplier can take no further action against the infringing account or content and has no influence over the internal processes conducted by the relevant social media platform. The Supplier is not responsible for any harm or damage caused to the Customer in these cases, and the Customer understands that the content or account will remain active. (This escalation path is sketched below.)
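The escalation path described in this subsection (report, contest or resubmit, Failed Takedown) can be summarized in a short sketch. The platform interface below is entirely hypothetical, since each platform defines its own abuse process; it is included only to make the flow concrete.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    """Hypothetical stand-in for a social media platform's abuse process."""
    can_contest: bool
    decisions: list  # queued review outcomes, e.g. ["rejected", "actioned"]

    def review(self, report: str) -> str:
        return self.decisions.pop(0) if self.decisions else "rejected"

    def contest(self, report: str) -> str:
        return f"contested:{report}"

    def resubmit(self, report: str) -> str:
        return f"resubmitted:{report}"

def social_takedown(report: str, platform: Platform) -> str:
    """Escalation per the clauses above: report, then contest or resubmit once."""
    if platform.review(report) == "actioned":
        return "taken down"
    followup = (platform.contest(report) if platform.can_contest
                else platform.resubmit(report))
    if platform.review(followup) == "actioned":
        return "taken down"
    return "Failed Takedown"  # the Supplier can take no further action

print(social_takedown("impersonation",
                      Platform(can_contest=True,
                               decisions=["rejected", "actioned"])))  # -> taken down
```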

  4. Legal Takedowns:
    1. In the course of providing the monitoring services detailed above, the Supplier may detect domains, websites, and/or social media accounts which infringe on the Customer's trademark and/or copyright rights.
    2. The Supplier will act on behalf of the Customer and follow the relevant legislated processes for contacting and notifying the owner of the infringing site and/or the applicable hosting provider. The responsiveness of the hosting provider, the infringer, and any other relevant third party differs on a case-by-case basis. Given the number of third parties involved in this process that are outside the Supplier's control, the Supplier cannot make guarantees about either the effectiveness or the period of time that it will take to finalize an incident.
    3. The Supplier commits to addressing incidents of trademark and/or copyright infringement within 24 hours of becoming aware of them.